Test Report: Docker_Linux_crio 21997

                    
f52e7af1cf54d5c1b3af81f5f4f56bb8b0b6d6f9:2025-12-01:42595

Failed tests: 48/415

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.26
44 TestAddons/parallel/Registry 16.49
45 TestAddons/parallel/RegistryCreds 0.4
46 TestAddons/parallel/Ingress 143.27
47 TestAddons/parallel/InspektorGadget 5.25
48 TestAddons/parallel/MetricsServer 5.31
50 TestAddons/parallel/CSI 53.54
51 TestAddons/parallel/Headlamp 2.51
52 TestAddons/parallel/CloudSpanner 5.26
53 TestAddons/parallel/LocalPath 8.11
54 TestAddons/parallel/NvidiaDevicePlugin 5.26
55 TestAddons/parallel/Yakd 5.26
56 TestAddons/parallel/AmdGpuDevicePlugin 5.26
106 TestFunctional/parallel/ServiceCmdConnect 602.89
140 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
162 TestFunctional/parallel/ServiceCmd/Format 0.54
163 TestFunctional/parallel/ServiceCmd/URL 0.54
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 602.79
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.64
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.95
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.87
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.3
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.2
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.53
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.53
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.53
294 TestJSONOutput/pause/Command 2.07
300 TestJSONOutput/unpause/Command 1.87
395 TestPause/serial/Pause 5.79
449 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
456 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.23
459 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.2
467 TestStartStop/group/old-k8s-version/serial/Pause 5.78
470 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.39
478 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.57
483 TestStartStop/group/no-preload/serial/Pause 5.86
488 TestStartStop/group/embed-certs/serial/Pause 6.72
492 TestStartStop/group/newest-cni/serial/Pause 6.04
496 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.06
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable volcano --alsologtostderr -v=1: exit status 11 (255.830717ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:00.744354   26495 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:00.744513   26495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:00.744523   26495 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:00.744527   26495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:00.744732   26495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:00.744987   26495 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:00.745329   26495 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:00.745348   26495 addons.go:622] checking whether the cluster is paused
	I1201 19:08:00.745458   26495 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:00.745476   26495 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:00.745935   26495 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:00.765625   26495 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:00.765691   26495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:00.783860   26495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:00.881795   26495 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:00.881864   26495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:00.910094   26495 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:00.910115   26495 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:00.910121   26495 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:00.910124   26495 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:00.910127   26495 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:00.910131   26495 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:00.910136   26495 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:00.910140   26495 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:00.910144   26495 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:00.910152   26495 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:00.910157   26495 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:00.910161   26495 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:00.910166   26495 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:00.910175   26495 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:00.910180   26495 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:00.910190   26495 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:00.910197   26495 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:00.910204   26495 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:00.910208   26495 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:00.910213   26495 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:00.910222   26495 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:00.910233   26495 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:00.910241   26495 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:00.910260   26495 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:00.910264   26495 cri.go:89] found id: ""
	I1201 19:08:00.910335   26495 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:00.924663   26495 out.go:203] 
	W1201 19:08:00.926197   26495 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:00.926215   26495 out.go:285] * 
	* 
	W1201 19:08:00.929145   26495 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:00.930729   26495 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
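Note on the failure signature above: the disable command exits with MK_ADDON_DISABLE_PAUSED (exit status 11) because the paused-state check itself errors out before it can determine anything: the crictl listing of kube-system containers succeeds, but the follow-up `sudo runc list -f json` fails with "open /run/runc: no such file or directory" on this crio-based node. The same signature repeats in the Registry and RegistryCreds failures below. A minimal manual re-check, assuming the addons-844427 profile from this run is still up; the `minikube ssh --` wrapping and the `ls` probe are our additions, the other commands are the ones shown in the log:

	# re-run the failing step; exit status 11 is expected while the runc check fails
	out/minikube-linux-amd64 -p addons-844427 addons disable volcano --alsologtostderr -v=1
	# the paused-state check performed inside the node: crictl succeeds, runc does not
	out/minikube-linux-amd64 -p addons-844427 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-844427 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p addons-844427 ssh -- ls /run/runc             # confirms the directory is absent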

                                                
                                    
TestAddons/parallel/Registry (16.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.058653ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003354646s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00273979s
addons_test.go:392: (dbg) Run:  kubectl --context addons-844427 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-844427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-844427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.985912369s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 ip
2025/12/01 19:08:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable registry --alsologtostderr -v=1: exit status 11 (240.768859ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:27.076105   29530 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:27.076413   29530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:27.076423   29530 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:27.076427   29530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:27.076650   29530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:27.076953   29530 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:27.077318   29530 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:27.077343   29530 addons.go:622] checking whether the cluster is paused
	I1201 19:08:27.077423   29530 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:27.077439   29530 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:27.077795   29530 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:27.095600   29530 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:27.095662   29530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:27.113035   29530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:27.210802   29530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:27.210882   29530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:27.241124   29530 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:27.241150   29530 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:27.241154   29530 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:27.241157   29530 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:27.241160   29530 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:27.241163   29530 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:27.241166   29530 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:27.241169   29530 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:27.241172   29530 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:27.241177   29530 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:27.241179   29530 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:27.241183   29530 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:27.241185   29530 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:27.241188   29530 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:27.241191   29530 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:27.241198   29530 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:27.241201   29530 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:27.241206   29530 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:27.241208   29530 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:27.241211   29530 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:27.241213   29530 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:27.241216   29530 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:27.241219   29530 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:27.241224   29530 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:27.241229   29530 cri.go:89] found id: ""
	I1201 19:08:27.241263   29530 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:27.255212   29530 out.go:203] 
	W1201 19:08:27.256500   29530 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:27.256522   29530 out.go:285] * 
	* 
	W1201 19:08:27.259497   29530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:27.260702   29530 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.49s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.140616ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-844427
addons_test.go:332: (dbg) Run:  kubectl --context addons-844427 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (247.432964ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:27.479572   29615 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:27.479887   29615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:27.479898   29615 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:27.479901   29615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:27.480129   29615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:27.480400   29615 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:27.480799   29615 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:27.480825   29615 addons.go:622] checking whether the cluster is paused
	I1201 19:08:27.480945   29615 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:27.480961   29615 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:27.481455   29615 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:27.500984   29615 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:27.501043   29615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:27.518593   29615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:27.616535   29615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:27.616606   29615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:27.645463   29615 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:27.645498   29615 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:27.645502   29615 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:27.645506   29615 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:27.645509   29615 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:27.645514   29615 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:27.645516   29615 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:27.645519   29615 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:27.645522   29615 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:27.645532   29615 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:27.645535   29615 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:27.645538   29615 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:27.645541   29615 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:27.645544   29615 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:27.645547   29615 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:27.645557   29615 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:27.645565   29615 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:27.645569   29615 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:27.645572   29615 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:27.645575   29615 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:27.645577   29615 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:27.645580   29615 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:27.645583   29615 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:27.645585   29615 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:27.645588   29615 cri.go:89] found id: ""
	I1201 19:08:27.645636   29615 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:27.659598   29615 out.go:203] 
	W1201 19:08:27.660745   29615 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:27.660764   29615 out.go:285] * 
	* 
	W1201 19:08:27.663779   29615 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:27.665002   29615 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

                                                
                                    
TestAddons/parallel/Ingress (143.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-844427 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-844427 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-844427 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c8692dac-25bd-43a9-b7b6-d090974d4cc4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c8692dac-25bd-43a9-b7b6-d090974d4cc4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.038053012s
I1201 19:08:28.871414   16873 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.729522477s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-844427 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
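Note on the failure above: the in-node curl against http://127.0.0.1/ with Host: nginx.example.com never gets a response and the ssh step gives up after roughly 2m12s with exit status 28, which is curl's operation-timed-out code (CURLE_OPERATION_TIMEDOUT), assuming the remote exit status is propagated through ssh. A quick manual re-check of the same path, assuming the profile is still running; the `-m 10` timeout, the `-w '%{http_code}'` output and the kubectl queries are our additions, while the curl target and Host header mirror the test command:

	out/minikube-linux-amd64 -p addons-844427 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-844427 -n ingress-nginx get pods -o wide
	kubectl --context addons-844427 get ingress,pods,svc -o wide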
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-844427
helpers_test.go:243: (dbg) docker inspect addons-844427:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13",
	        "Created": "2025-12-01T19:06:21.064128042Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T19:06:21.096067188Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/hostname",
	        "HostsPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/hosts",
	        "LogPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13-json.log",
	        "Name": "/addons-844427",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-844427:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-844427",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13",
	                "LowerDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-844427",
	                "Source": "/var/lib/docker/volumes/addons-844427/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-844427",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-844427",
	                "name.minikube.sigs.k8s.io": "addons-844427",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d80dd366e8d03cdc0a1ffc3bed3384f1926667f9916154cdc91fe88cd863e7db",
	            "SandboxKey": "/var/run/docker/netns/d80dd366e8d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-844427": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef8689c1e4ee442dd8327401d7cb77b76fb85fe450034dbb88cb010ecfdb389c",
	                    "EndpointID": "9fbbcd729ac40848418db03bba1b61c5588ccf7fabc589afcbdd667e865dd380",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:b6:c6:da:70:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-844427",
	                        "7984c52a63dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-844427 -n addons-844427
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-844427 logs -n 25: (1.096426419s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-325932 --alsologtostderr --binary-mirror http://127.0.0.1:37241 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-325932 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ -p binary-mirror-325932                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-325932 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ addons  │ enable dashboard -p addons-844427                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-844427                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ start   │ -p addons-844427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-844427 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ enable headlamp -p addons-844427 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ ssh     │ addons-844427 ssh cat /opt/local-path-provisioner/pvc-151ebd6f-1249-4e0a-b7bb-e835b33c9271_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-844427 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ ip      │ addons-844427 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-844427 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-844427                                                                                                                                                                                                                                                                                                                                                                                           │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-844427 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ ssh     │ addons-844427 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │                     │
	│ addons  │ addons-844427 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │                     │
	│ ip      │ addons-844427 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-844427        │ jenkins │ v1.37.0 │ 01 Dec 25 19:10 UTC │ 01 Dec 25 19:10 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:59.446368   18652 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:59.446454   18652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:59.446458   18652 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:59.446462   18652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:59.446642   18652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:05:59.447095   18652 out.go:368] Setting JSON to false
	I1201 19:05:59.447853   18652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2910,"bootTime":1764613049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:59.447914   18652 start.go:143] virtualization: kvm guest
	I1201 19:05:59.449739   18652 out.go:179] * [addons-844427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:59.450886   18652 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:05:59.450912   18652 notify.go:221] Checking for updates...
	I1201 19:05:59.453373   18652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:59.454620   18652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:05:59.455794   18652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:05:59.456926   18652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:05:59.458060   18652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:05:59.459240   18652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:59.483493   18652 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:05:59.483582   18652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:59.539071   18652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-01 19:05:59.530206961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:59.539180   18652 docker.go:319] overlay module found
	I1201 19:05:59.540936   18652 out.go:179] * Using the docker driver based on user configuration
	I1201 19:05:59.542113   18652 start.go:309] selected driver: docker
	I1201 19:05:59.542127   18652 start.go:927] validating driver "docker" against <nil>
	I1201 19:05:59.542138   18652 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:05:59.542666   18652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:59.596717   18652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-01 19:05:59.587916678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:59.596854   18652 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:59.597035   18652 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:05:59.598694   18652 out.go:179] * Using Docker driver with root privileges
	I1201 19:05:59.599904   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:05:59.599958   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:05:59.599968   18652 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 19:05:59.600028   18652 start.go:353] cluster config:
	{Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:05:59.601239   18652 out.go:179] * Starting "addons-844427" primary control-plane node in "addons-844427" cluster
	I1201 19:05:59.602316   18652 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 19:05:59.603457   18652 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 19:05:59.604531   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:59.604561   18652 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 19:05:59.604571   18652 cache.go:65] Caching tarball of preloaded images
	I1201 19:05:59.604609   18652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 19:05:59.604658   18652 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 19:05:59.604673   18652 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 19:05:59.604998   18652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json ...
	I1201 19:05:59.605025   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json: {Name:mkc49c9a3396671097648e11753d3c1d4f182d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:59.620629   18652 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1201 19:05:59.620742   18652 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1201 19:05:59.620758   18652 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1201 19:05:59.620763   18652 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1201 19:05:59.620769   18652 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1201 19:05:59.620776   18652 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1201 19:06:13.305751   18652 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1201 19:06:13.305803   18652 cache.go:243] Successfully downloaded all kic artifacts
	I1201 19:06:13.305845   18652 start.go:360] acquireMachinesLock for addons-844427: {Name:mk144e573f21904e0704a69cb6c835a66d7023b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 19:06:13.305974   18652 start.go:364] duration metric: took 104.638µs to acquireMachinesLock for "addons-844427"
	I1201 19:06:13.306009   18652 start.go:93] Provisioning new machine with config: &{Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:06:13.306128   18652 start.go:125] createHost starting for "" (driver="docker")
	I1201 19:06:13.308869   18652 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1201 19:06:13.309103   18652 start.go:159] libmachine.API.Create for "addons-844427" (driver="docker")
	I1201 19:06:13.309152   18652 client.go:173] LocalClient.Create starting
	I1201 19:06:13.309260   18652 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem
	I1201 19:06:13.346411   18652 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem
	I1201 19:06:13.438320   18652 cli_runner.go:164] Run: docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 19:06:13.456233   18652 cli_runner.go:211] docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 19:06:13.456323   18652 network_create.go:284] running [docker network inspect addons-844427] to gather additional debugging logs...
	I1201 19:06:13.456341   18652 cli_runner.go:164] Run: docker network inspect addons-844427
	W1201 19:06:13.472025   18652 cli_runner.go:211] docker network inspect addons-844427 returned with exit code 1
	I1201 19:06:13.472059   18652 network_create.go:287] error running [docker network inspect addons-844427]: docker network inspect addons-844427: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-844427 not found
	I1201 19:06:13.472076   18652 network_create.go:289] output of [docker network inspect addons-844427]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-844427 not found
	
	** /stderr **
	I1201 19:06:13.472162   18652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 19:06:13.488033   18652 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1ae30}
	I1201 19:06:13.488079   18652 network_create.go:124] attempt to create docker network addons-844427 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1201 19:06:13.488137   18652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-844427 addons-844427
	I1201 19:06:13.531955   18652 network_create.go:108] docker network addons-844427 192.168.49.0/24 created
	I1201 19:06:13.531982   18652 kic.go:121] calculated static IP "192.168.49.2" for the "addons-844427" container
	I1201 19:06:13.532058   18652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 19:06:13.547549   18652 cli_runner.go:164] Run: docker volume create addons-844427 --label name.minikube.sigs.k8s.io=addons-844427 --label created_by.minikube.sigs.k8s.io=true
	I1201 19:06:13.564252   18652 oci.go:103] Successfully created a docker volume addons-844427
	I1201 19:06:13.564382   18652 cli_runner.go:164] Run: docker run --rm --name addons-844427-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --entrypoint /usr/bin/test -v addons-844427:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 19:06:17.186503   18652 cli_runner.go:217] Completed: docker run --rm --name addons-844427-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --entrypoint /usr/bin/test -v addons-844427:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.622069758s)
	I1201 19:06:17.186545   18652 oci.go:107] Successfully prepared a docker volume addons-844427
	I1201 19:06:17.186597   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:06:17.186609   18652 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 19:06:17.186653   18652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-844427:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 19:06:20.990991   18652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-844427:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.804282602s)
	I1201 19:06:20.991021   18652 kic.go:203] duration metric: took 3.804409241s to extract preloaded images to volume ...
	W1201 19:06:20.991138   18652 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1201 19:06:20.991182   18652 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1201 19:06:20.991218   18652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 19:06:21.048768   18652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-844427 --name addons-844427 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-844427 --network addons-844427 --ip 192.168.49.2 --volume addons-844427:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1201 19:06:21.334086   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Running}}
	I1201 19:06:21.353554   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.372742   18652 cli_runner.go:164] Run: docker exec addons-844427 stat /var/lib/dpkg/alternatives/iptables
	I1201 19:06:21.417875   18652 oci.go:144] the created container "addons-844427" has a running status.
	I1201 19:06:21.417900   18652 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa...
	I1201 19:06:21.489966   18652 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 19:06:21.516576   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.533912   18652 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 19:06:21.533934   18652 kic_runner.go:114] Args: [docker exec --privileged addons-844427 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 19:06:21.609136   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.633330   18652 machine.go:94] provisionDockerMachine start ...
	I1201 19:06:21.633451   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.656478   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.656786   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.656804   18652 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 19:06:21.799650   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-844427
	
	I1201 19:06:21.799686   18652 ubuntu.go:182] provisioning hostname "addons-844427"
	I1201 19:06:21.799761   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.819197   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.819455   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.819476   18652 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-844427 && echo "addons-844427" | sudo tee /etc/hostname
	I1201 19:06:21.967902   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-844427
	
	I1201 19:06:21.967979   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.986364   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.986577   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.986593   18652 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-844427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-844427/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-844427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 19:06:22.123679   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 19:06:22.123708   18652 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 19:06:22.123732   18652 ubuntu.go:190] setting up certificates
	I1201 19:06:22.123740   18652 provision.go:84] configureAuth start
	I1201 19:06:22.123783   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.140560   18652 provision.go:143] copyHostCerts
	I1201 19:06:22.140621   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 19:06:22.140741   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 19:06:22.140846   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 19:06:22.140924   18652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.addons-844427 san=[127.0.0.1 192.168.49.2 addons-844427 localhost minikube]
	I1201 19:06:22.215513   18652 provision.go:177] copyRemoteCerts
	I1201 19:06:22.215562   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 19:06:22.215612   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.232480   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.330363   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 19:06:22.348248   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 19:06:22.364652   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 19:06:22.380460   18652 provision.go:87] duration metric: took 256.709149ms to configureAuth
	I1201 19:06:22.380481   18652 ubuntu.go:206] setting minikube options for container-runtime
	I1201 19:06:22.380657   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:22.380780   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.397095   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:22.397395   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:22.397424   18652 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 19:06:22.668591   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 19:06:22.668611   18652 machine.go:97] duration metric: took 1.035257294s to provisionDockerMachine
	I1201 19:06:22.668622   18652 client.go:176] duration metric: took 9.359459963s to LocalClient.Create
	I1201 19:06:22.668662   18652 start.go:167] duration metric: took 9.359560986s to libmachine.API.Create "addons-844427"
	I1201 19:06:22.668670   18652 start.go:293] postStartSetup for "addons-844427" (driver="docker")
	I1201 19:06:22.668679   18652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 19:06:22.668723   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 19:06:22.668764   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.685615   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.785105   18652 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 19:06:22.788465   18652 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 19:06:22.788489   18652 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 19:06:22.788499   18652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 19:06:22.788547   18652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 19:06:22.788569   18652 start.go:296] duration metric: took 119.89335ms for postStartSetup
	I1201 19:06:22.788859   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.805467   18652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json ...
	I1201 19:06:22.805700   18652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:06:22.805736   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.822672   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.918180   18652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 19:06:22.922441   18652 start.go:128] duration metric: took 9.616299155s to createHost
	I1201 19:06:22.922465   18652 start.go:83] releasing machines lock for "addons-844427", held for 9.6164769s
	I1201 19:06:22.922523   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.939414   18652 ssh_runner.go:195] Run: cat /version.json
	I1201 19:06:22.939453   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.939537   18652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 19:06:22.939633   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.957222   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.958034   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:23.051154   18652 ssh_runner.go:195] Run: systemctl --version
	I1201 19:06:23.102031   18652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 19:06:23.136493   18652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 19:06:23.140917   18652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 19:06:23.140968   18652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 19:06:23.166171   18652 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 19:06:23.166193   18652 start.go:496] detecting cgroup driver to use...
	I1201 19:06:23.166225   18652 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 19:06:23.166269   18652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 19:06:23.181711   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 19:06:23.193822   18652 docker.go:218] disabling cri-docker service (if available) ...
	I1201 19:06:23.193881   18652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 19:06:23.210578   18652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 19:06:23.226846   18652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 19:06:23.302480   18652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 19:06:23.389350   18652 docker.go:234] disabling docker service ...
	I1201 19:06:23.389418   18652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 19:06:23.406722   18652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 19:06:23.418393   18652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 19:06:23.499436   18652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 19:06:23.575012   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 19:06:23.586704   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 19:06:23.599572   18652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 19:06:23.599632   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.608878   18652 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 19:06:23.608925   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.616810   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.624791   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.633097   18652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 19:06:23.640485   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.648489   18652 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.660819   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.669095   18652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 19:06:23.676004   18652 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 19:06:23.676066   18652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 19:06:23.687274   18652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 19:06:23.694680   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:23.772279   18652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 19:06:23.900576   18652 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 19:06:23.900636   18652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 19:06:23.904311   18652 start.go:564] Will wait 60s for crictl version
	I1201 19:06:23.904364   18652 ssh_runner.go:195] Run: which crictl
	I1201 19:06:23.907668   18652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 19:06:23.929764   18652 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 19:06:23.929876   18652 ssh_runner.go:195] Run: crio --version
	I1201 19:06:23.956668   18652 ssh_runner.go:195] Run: crio --version
	I1201 19:06:23.984683   18652 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 19:06:23.985858   18652 cli_runner.go:164] Run: docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 19:06:24.002507   18652 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 19:06:24.006557   18652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:06:24.016379   18652 kubeadm.go:884] updating cluster {Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 19:06:24.016494   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:06:24.016545   18652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:06:24.047539   18652 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 19:06:24.047574   18652 crio.go:433] Images already preloaded, skipping extraction
	I1201 19:06:24.047635   18652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:06:24.072232   18652 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 19:06:24.072254   18652 cache_images.go:86] Images are preloaded, skipping loading
	I1201 19:06:24.072262   18652 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1201 19:06:24.072374   18652 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-844427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 19:06:24.072447   18652 ssh_runner.go:195] Run: crio config
	I1201 19:06:24.115614   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:06:24.115638   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:06:24.115657   18652 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 19:06:24.115684   18652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-844427 NodeName:addons-844427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 19:06:24.115838   18652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-844427"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 19:06:24.115915   18652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 19:06:24.124032   18652 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 19:06:24.124092   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 19:06:24.132114   18652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1201 19:06:24.144519   18652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 19:06:24.159436   18652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1201 19:06:24.171715   18652 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 19:06:24.175327   18652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:06:24.184977   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:24.262546   18652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:06:24.281523   18652 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427 for IP: 192.168.49.2
	I1201 19:06:24.281546   18652 certs.go:195] generating shared ca certs ...
	I1201 19:06:24.281567   18652 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.281702   18652 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 19:06:24.323486   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt ...
	I1201 19:06:24.323514   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt: {Name:mke2dc2bda082d7cec68c315ca42d5e315f550a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.323706   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key ...
	I1201 19:06:24.323720   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key: {Name:mkc0680ae5c06e9f83eb9436d2f7fc0a150e26bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.323818   18652 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 19:06:24.400383   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt ...
	I1201 19:06:24.400409   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt: {Name:mk09a8428296dedb7a269a80e7a3b1792e56a101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.400568   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key ...
	I1201 19:06:24.400579   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key: {Name:mk00840924daeb47b43c83d2f1f1f2e8f48beaa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.400646   18652 certs.go:257] generating profile certs ...
	I1201 19:06:24.400696   18652 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key
	I1201 19:06:24.400710   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt with IP's: []
	I1201 19:06:24.463121   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt ...
	I1201 19:06:24.463145   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: {Name:mk2ebc2b87627b12e31a7751c9c82dd1b2ec20df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.463307   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key ...
	I1201 19:06:24.463320   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key: {Name:mk2dea41988683e567eb325458cbbc7b09e11e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.463399   18652 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e
	I1201 19:06:24.463418   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1201 19:06:24.602658   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e ...
	I1201 19:06:24.602685   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e: {Name:mkc2344faacd89d4d0688f6c77f1919afa037ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.602844   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e ...
	I1201 19:06:24.602857   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e: {Name:mk7fa3f4e2e4088d9c5aaded46e27351b455ac2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.602929   18652 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt
	I1201 19:06:24.603002   18652 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key
	I1201 19:06:24.603056   18652 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key
	I1201 19:06:24.603073   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt with IP's: []
	I1201 19:06:24.822280   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt ...
	I1201 19:06:24.822316   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt: {Name:mk8c371d73674b41864e81a157290f8bd3fe3d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.822482   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key ...
	I1201 19:06:24.822493   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key: {Name:mke42070f335d652deae6f54cd0d19f5d1b18e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.822661   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 19:06:24.822697   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 19:06:24.822723   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 19:06:24.822746   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 19:06:24.823276   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 19:06:24.840677   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 19:06:24.856545   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 19:06:24.872903   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 19:06:24.889175   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 19:06:24.905406   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 19:06:24.921714   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 19:06:24.938140   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 19:06:24.954324   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 19:06:24.971951   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 19:06:24.983889   18652 ssh_runner.go:195] Run: openssl version
	I1201 19:06:24.989774   18652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 19:06:25.000573   18652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.004437   18652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.004488   18652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.040395   18652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
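The two openssl steps above derive the subject hash of the minikube CA and symlink it into /etc/ssl/certs under that hash (b5213941.0) so tools on the node trust the cluster CA. A minimal sketch of the same check by hand, using the paths from the log:

	# print the hash openssl expects as the /etc/ssl/certs symlink name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# confirm the symlink points back at the minikube CA
	ls -l /etc/ssl/certs/b5213941.0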
	I1201 19:06:25.049542   18652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 19:06:25.053424   18652 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 19:06:25.053477   18652 kubeadm.go:401] StartCluster: {Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:06:25.053542   18652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:06:25.053595   18652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:06:25.082295   18652 cri.go:89] found id: ""
	I1201 19:06:25.082354   18652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 19:06:25.090170   18652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 19:06:25.097732   18652 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 19:06:25.097774   18652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 19:06:25.105123   18652 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 19:06:25.105146   18652 kubeadm.go:158] found existing configuration files:
	
	I1201 19:06:25.105192   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 19:06:25.112518   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 19:06:25.112566   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 19:06:25.119477   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 19:06:25.126702   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 19:06:25.126749   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 19:06:25.134461   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 19:06:25.141943   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 19:06:25.141999   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 19:06:25.149147   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 19:06:25.157011   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 19:06:25.157085   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 19:06:25.164296   18652 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 19:06:25.220808   18652 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 19:06:25.275084   18652 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 19:06:35.124038   18652 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 19:06:35.124110   18652 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 19:06:35.124231   18652 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 19:06:35.124332   18652 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 19:06:35.124396   18652 kubeadm.go:319] OS: Linux
	I1201 19:06:35.124460   18652 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 19:06:35.124528   18652 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 19:06:35.124601   18652 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 19:06:35.124669   18652 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 19:06:35.124737   18652 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 19:06:35.124786   18652 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 19:06:35.124832   18652 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 19:06:35.124870   18652 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 19:06:35.124947   18652 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 19:06:35.125030   18652 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 19:06:35.125102   18652 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 19:06:35.125162   18652 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 19:06:35.126720   18652 out.go:252]   - Generating certificates and keys ...
	I1201 19:06:35.126786   18652 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 19:06:35.126841   18652 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 19:06:35.126896   18652 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 19:06:35.126950   18652 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 19:06:35.127005   18652 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 19:06:35.127061   18652 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 19:06:35.127112   18652 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 19:06:35.127271   18652 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-844427 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 19:06:35.127370   18652 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 19:06:35.127527   18652 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-844427 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 19:06:35.127586   18652 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 19:06:35.127641   18652 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 19:06:35.127679   18652 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 19:06:35.127724   18652 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 19:06:35.127766   18652 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 19:06:35.127811   18652 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 19:06:35.127853   18652 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 19:06:35.127928   18652 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 19:06:35.127993   18652 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 19:06:35.128102   18652 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 19:06:35.128170   18652 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 19:06:35.129469   18652 out.go:252]   - Booting up control plane ...
	I1201 19:06:35.129547   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 19:06:35.129612   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 19:06:35.129666   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 19:06:35.129771   18652 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 19:06:35.129858   18652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 19:06:35.129942   18652 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 19:06:35.130028   18652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 19:06:35.130074   18652 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 19:06:35.130182   18652 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 19:06:35.130278   18652 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 19:06:35.130343   18652 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001095894s
	I1201 19:06:35.130416   18652 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 19:06:35.130491   18652 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1201 19:06:35.130567   18652 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 19:06:35.130639   18652 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 19:06:35.130699   18652 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.723809939s
	I1201 19:06:35.130757   18652 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.018008917s
	I1201 19:06:35.130810   18652 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500986328s
	I1201 19:06:35.130929   18652 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 19:06:35.131047   18652 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 19:06:35.131103   18652 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 19:06:35.131273   18652 kubeadm.go:319] [mark-control-plane] Marking the node addons-844427 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 19:06:35.131365   18652 kubeadm.go:319] [bootstrap-token] Using token: gyuws6.vlonq0lhcrfslwtv
	I1201 19:06:35.133603   18652 out.go:252]   - Configuring RBAC rules ...
	I1201 19:06:35.133692   18652 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 19:06:35.133776   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 19:06:35.133901   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 19:06:35.134019   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 19:06:35.134122   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 19:06:35.134208   18652 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 19:06:35.134314   18652 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 19:06:35.134352   18652 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 19:06:35.134397   18652 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 19:06:35.134405   18652 kubeadm.go:319] 
	I1201 19:06:35.134458   18652 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 19:06:35.134464   18652 kubeadm.go:319] 
	I1201 19:06:35.134536   18652 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 19:06:35.134542   18652 kubeadm.go:319] 
	I1201 19:06:35.134562   18652 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 19:06:35.134613   18652 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 19:06:35.134657   18652 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 19:06:35.134665   18652 kubeadm.go:319] 
	I1201 19:06:35.134715   18652 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 19:06:35.134720   18652 kubeadm.go:319] 
	I1201 19:06:35.134763   18652 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 19:06:35.134769   18652 kubeadm.go:319] 
	I1201 19:06:35.134812   18652 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 19:06:35.134882   18652 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 19:06:35.134940   18652 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 19:06:35.134946   18652 kubeadm.go:319] 
	I1201 19:06:35.135023   18652 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 19:06:35.135096   18652 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 19:06:35.135107   18652 kubeadm.go:319] 
	I1201 19:06:35.135203   18652 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gyuws6.vlonq0lhcrfslwtv \
	I1201 19:06:35.135318   18652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 19:06:35.135344   18652 kubeadm.go:319] 	--control-plane 
	I1201 19:06:35.135348   18652 kubeadm.go:319] 
	I1201 19:06:35.135420   18652 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 19:06:35.135427   18652 kubeadm.go:319] 
	I1201 19:06:35.135490   18652 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gyuws6.vlonq0lhcrfslwtv \
	I1201 19:06:35.135585   18652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
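At this point kubeadm init has finished and the admin kubeconfig exists at /etc/kubernetes/admin.conf. As a sketch, the control plane can be spot-checked from inside the node; this assumes the bundled kubectl under /var/lib/minikube/binaries/v1.34.2 (the binary the test itself uses) is the one invoked:

	sudo KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.34.2/kubectl get nodes
	sudo KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.34.2/kubectl -n kube-system get pods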
	I1201 19:06:35.135598   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:06:35.135607   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:06:35.137112   18652 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 19:06:35.138402   18652 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 19:06:35.142762   18652 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 19:06:35.142780   18652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 19:06:35.155780   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
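With the docker driver plus crio runtime the log above selects kindnet and applies its manifest with the bundled kubectl. A sketch for confirming the CNI rollout; the DaemonSet name kindnet in kube-system is an assumption based on the usual kindnet manifest, not something shown in the log:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet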
	I1201 19:06:35.356084   18652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 19:06:35.356168   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:35.356196   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-844427 minikube.k8s.io/updated_at=2025_12_01T19_06_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-844427 minikube.k8s.io/primary=true
	I1201 19:06:35.367447   18652 ops.go:34] apiserver oom_adj: -16
	I1201 19:06:35.437771   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:35.937905   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:36.438209   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:36.938520   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:37.438660   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:37.938270   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:38.438501   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:38.938719   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:39.438877   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:39.938800   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:40.009185   18652 kubeadm.go:1114] duration metric: took 4.653078497s to wait for elevateKubeSystemPrivileges
	I1201 19:06:40.009222   18652 kubeadm.go:403] duration metric: took 14.9557493s to StartCluster
	I1201 19:06:40.009242   18652 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:40.009414   18652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:06:40.009838   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:40.010033   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 19:06:40.010043   18652 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:06:40.010132   18652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
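The toEnable map above is the set of addons the manager will try to turn on for the addons-844427 profile. From the host, the same set can be listed or toggled with the minikube binary built by this job, for example:

	out/minikube-linux-amd64 -p addons-844427 addons list
	out/minikube-linux-amd64 -p addons-844427 addons enable metrics-server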
	I1201 19:06:40.010253   18652 addons.go:70] Setting yakd=true in profile "addons-844427"
	I1201 19:06:40.010261   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:40.010276   18652 addons.go:239] Setting addon yakd=true in "addons-844427"
	I1201 19:06:40.010278   18652 addons.go:70] Setting inspektor-gadget=true in profile "addons-844427"
	I1201 19:06:40.010309   18652 addons.go:70] Setting registry-creds=true in profile "addons-844427"
	I1201 19:06:40.010321   18652 addons.go:239] Setting addon inspektor-gadget=true in "addons-844427"
	I1201 19:06:40.010333   18652 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-844427"
	I1201 19:06:40.010322   18652 addons.go:70] Setting default-storageclass=true in profile "addons-844427"
	I1201 19:06:40.010339   18652 addons.go:70] Setting volumesnapshots=true in profile "addons-844427"
	I1201 19:06:40.010344   18652 addons.go:70] Setting metrics-server=true in profile "addons-844427"
	I1201 19:06:40.010351   18652 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-844427"
	I1201 19:06:40.010355   18652 addons.go:239] Setting addon volumesnapshots=true in "addons-844427"
	I1201 19:06:40.010358   18652 addons.go:239] Setting addon metrics-server=true in "addons-844427"
	I1201 19:06:40.010365   18652 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-844427"
	I1201 19:06:40.010369   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010374   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010375   18652 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-844427"
	I1201 19:06:40.010388   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010413   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010463   18652 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-844427"
	I1201 19:06:40.010488   18652 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-844427"
	I1201 19:06:40.010510   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010628   18652 addons.go:70] Setting registry=true in profile "addons-844427"
	I1201 19:06:40.010642   18652 addons.go:239] Setting addon registry=true in "addons-844427"
	I1201 19:06:40.010663   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010830   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010928   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010943   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010963   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.011086   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.011377   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010333   18652 addons.go:239] Setting addon registry-creds=true in "addons-844427"
	I1201 19:06:40.011892   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012363   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010355   18652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-844427"
	I1201 19:06:40.010330   18652 addons.go:70] Setting volcano=true in profile "addons-844427"
	I1201 19:06:40.012614   18652 addons.go:70] Setting gcp-auth=true in profile "addons-844427"
	I1201 19:06:40.010322   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012637   18652 mustload.go:66] Loading cluster: addons-844427
	I1201 19:06:40.012687   18652 addons.go:70] Setting cloud-spanner=true in profile "addons-844427"
	I1201 19:06:40.012702   18652 addons.go:239] Setting addon cloud-spanner=true in "addons-844427"
	I1201 19:06:40.012725   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012794   18652 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-844427"
	I1201 19:06:40.012853   18652 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-844427"
	I1201 19:06:40.012883   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013045   18652 addons.go:70] Setting ingress=true in profile "addons-844427"
	I1201 19:06:40.013069   18652 addons.go:239] Setting addon ingress=true in "addons-844427"
	I1201 19:06:40.013103   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013159   18652 addons.go:70] Setting storage-provisioner=true in profile "addons-844427"
	I1201 19:06:40.013180   18652 addons.go:239] Setting addon storage-provisioner=true in "addons-844427"
	I1201 19:06:40.013203   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012618   18652 addons.go:239] Setting addon volcano=true in "addons-844427"
	I1201 19:06:40.013244   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013386   18652 addons.go:70] Setting ingress-dns=true in profile "addons-844427"
	I1201 19:06:40.013427   18652 addons.go:239] Setting addon ingress-dns=true in "addons-844427"
	I1201 19:06:40.013454   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.016622   18652 out.go:179] * Verifying Kubernetes components...
	I1201 19:06:40.018763   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:40.022727   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.023394   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.023987   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024339   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:40.024412   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024589   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024605   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024664   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.025054   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.027467   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.029113   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.054449   18652 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-844427"
	I1201 19:06:40.054499   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.054948   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.056828   18652 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1201 19:06:40.058346   18652 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1201 19:06:40.058680   18652 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:40.058751   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1201 19:06:40.058849   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.061231   18652 out.go:179]   - Using image docker.io/registry:3.0.0
	I1201 19:06:40.064938   18652 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1201 19:06:40.065705   18652 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1201 19:06:40.065745   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1201 19:06:40.065830   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.066875   18652 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:40.066933   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1201 19:06:40.067040   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.089195   18652 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1201 19:06:40.090355   18652 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:40.090413   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1201 19:06:40.090506   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	W1201 19:06:40.105604   18652 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1201 19:06:40.109594   18652 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1201 19:06:40.110871   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1201 19:06:40.110927   18652 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:40.110942   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1201 19:06:40.111018   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.112183   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1201 19:06:40.112197   18652 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1201 19:06:40.112246   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.114107   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.116954   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1201 19:06:40.119941   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1201 19:06:40.121386   18652 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1201 19:06:40.127275   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1201 19:06:40.127315   18652 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1201 19:06:40.127383   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.134580   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1201 19:06:40.134606   18652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 19:06:40.134681   18652 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1201 19:06:40.139638   18652 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:40.139663   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1201 19:06:40.139724   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.139880   18652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:40.139891   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 19:06:40.139948   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.140169   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1201 19:06:40.141647   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1201 19:06:40.143157   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.144632   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1201 19:06:40.145534   18652 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1201 19:06:40.146696   18652 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:40.146832   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1201 19:06:40.147105   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.148701   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.149771   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1201 19:06:40.151061   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1201 19:06:40.151719   18652 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1201 19:06:40.152384   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1201 19:06:40.152430   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1201 19:06:40.152549   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.155260   18652 out.go:179]   - Using image docker.io/busybox:stable
	I1201 19:06:40.157817   18652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:40.157884   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1201 19:06:40.157971   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.162443   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.166750   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:40.168632   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:40.169865   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 19:06:40.170218   18652 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1201 19:06:40.171604   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1201 19:06:40.172865   18652 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:40.172887   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1201 19:06:40.173170   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.173382   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1201 19:06:40.173394   18652 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1201 19:06:40.173439   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.177348   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.184904   18652 addons.go:239] Setting addon default-storageclass=true in "addons-844427"
	I1201 19:06:40.184962   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.185471   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.211255   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.217969   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.218526   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.223832   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.229052   18652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:06:40.230381   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.234553   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.237214   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.237439   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.238618   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	W1201 19:06:40.242157   18652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 19:06:40.242252   18652 retry.go:31] will retry after 246.035147ms: ssh: handshake failed: EOF
	I1201 19:06:40.244959   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.248005   18652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:40.248061   18652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 19:06:40.248140   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.279897   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.328645   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:40.353736   18652 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1201 19:06:40.353755   18652 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1201 19:06:40.370174   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:40.373302   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:40.378144   18652 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:40.378174   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1201 19:06:40.387152   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:40.393602   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:40.394278   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:40.403858   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1201 19:06:40.403894   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1201 19:06:40.405859   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:40.407621   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:40.434914   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1201 19:06:40.434937   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1201 19:06:40.444652   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:40.446540   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1201 19:06:40.446616   18652 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1201 19:06:40.451249   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:40.452574   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1201 19:06:40.452592   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1201 19:06:40.475892   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1201 19:06:40.475918   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1201 19:06:40.488345   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1201 19:06:40.488422   18652 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1201 19:06:40.504673   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1201 19:06:40.504706   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1201 19:06:40.507481   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1201 19:06:40.507499   18652 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1201 19:06:40.524329   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1201 19:06:40.524357   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1201 19:06:40.535374   18652 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1201 19:06:40.536694   18652 node_ready.go:35] waiting up to 6m0s for node "addons-844427" to be "Ready" ...
	I1201 19:06:40.538814   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:40.538834   18652 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1201 19:06:40.559103   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1201 19:06:40.559128   18652 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1201 19:06:40.569968   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1201 19:06:40.569997   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1201 19:06:40.579639   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1201 19:06:40.579669   18652 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1201 19:06:40.597302   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:40.604590   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1201 19:06:40.604618   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1201 19:06:40.613593   18652 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:40.613614   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1201 19:06:40.639481   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:40.639559   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1201 19:06:40.645692   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1201 19:06:40.645762   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1201 19:06:40.668385   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:40.683439   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:40.697095   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1201 19:06:40.697120   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1201 19:06:40.705802   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:40.755708   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1201 19:06:40.755757   18652 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1201 19:06:40.810359   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1201 19:06:40.810382   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1201 19:06:40.864793   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1201 19:06:40.864823   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1201 19:06:40.899547   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 19:06:40.899573   18652 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1201 19:06:40.977834   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
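Editor's note: each addons.go:436 / ssh_runner.go:362 pair above copies an addon manifest into /etc/kubernetes/addons on the node, and the ssh_runner.go:195 lines then apply it with the kubeconfig and kubectl binary bundled with the cluster. A minimal local sketch of that apply step follows (this is not minikube's actual ssh_runner, which executes the command over SSH; the paths are copied from the log):

    // apply_addon.go: hedged sketch of the "kubectl apply -f <addon>" step logged above.
    // Assumes kubectl and the manifests exist at the paths shown in the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.34.2/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/registry-rc.yaml",
            "/etc/kubernetes/addons/registry-svc.yaml",
            "/etc/kubernetes/addons/registry-proxy.yaml",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }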
	I1201 19:06:41.041651   18652 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-844427" context rescaled to 1 replicas
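Editor's note: the kapi.go:214 line records minikube scaling the stock coredns deployment down to a single replica for this one-node cluster. A hedged client-go sketch of such a rescale (the real logic lives in minikube's kapi package; this is only illustrative):

    // rescale_coredns.go: hedged sketch of scaling the coredns deployment to 1 replica.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        deployments := client.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println(`"coredns" deployment rescaled to 1 replica`)
    }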
	I1201 19:06:41.569174   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161516722s)
	I1201 19:06:41.569203   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.163309719s)
	I1201 19:06:41.569212   18652 addons.go:495] Verifying addon registry=true in "addons-844427"
	I1201 19:06:41.569226   18652 addons.go:495] Verifying addon ingress=true in "addons-844427"
	I1201 19:06:41.569260   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124522929s)
	I1201 19:06:41.569371   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.118091532s)
	I1201 19:06:41.569457   18652 addons.go:495] Verifying addon metrics-server=true in "addons-844427"
	I1201 19:06:41.571468   18652 out.go:179] * Verifying ingress addon...
	I1201 19:06:41.571478   18652 out.go:179] * Verifying registry addon...
	I1201 19:06:41.573375   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1201 19:06:41.573393   18652 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1201 19:06:41.575632   18652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 19:06:41.575649   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:41.575775   18652 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1201 19:06:41.575794   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
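Editor's note: the kapi.go:75/86/96 lines above (and the long run of "waiting for pod" entries below) are minikube's addon verifier: it lists pods matching a label selector in a namespace and polls until one reports Running. A minimal client-go sketch of that polling loop, with the selector and namespace taken from the log and an illustrative timeout:

    // wait_for_pod.go: hedged sketch of the label-selector poll behind the kapi.go "waiting for pod" lines.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPod(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods(ns).List(context.Background(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
        }
        return fmt.Errorf("no pod matching %q in %q is Running after %s", selector, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPod(client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("registry pod is Running")
    }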
	I1201 19:06:42.003820   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.320337499s)
	I1201 19:06:42.003858   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.335432908s)
	W1201 19:06:42.003892   18652 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:42.003965   18652 retry.go:31] will retry after 244.660599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:42.003917   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.298078797s)
	I1201 19:06:42.004213   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.026344329s)
	I1201 19:06:42.004242   18652 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-844427"
	I1201 19:06:42.005482   18652 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-844427 service yakd-dashboard -n yakd-dashboard
	
	I1201 19:06:42.006429   18652 out.go:179] * Verifying csi-hostpath-driver addon...
	I1201 19:06:42.009403   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1201 19:06:42.013157   18652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 19:06:42.013177   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:42.112745   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.112897   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:42.249102   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:42.512389   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:42.539541   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
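Editor's note: the node_ready.go:35/57 lines are the other wait running in parallel with the pod polls: minikube fetches the node object and retries while its Ready condition is still False (the node only flips to Ready once networking is up, which is also why every addon pod above is still Pending). A hedged sketch of that condition check:

    // node_ready_check.go: hedged sketch of testing a node's Ready condition.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeIsReady(client, "addons-844427")
        if err != nil {
            panic(err)
        }
        fmt.Println("node Ready:", ready)
    }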
	I1201 19:06:42.613459   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.613632   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.013197   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.113504   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:43.113666   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.512165   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.576387   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:43.576400   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.012608   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.076621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.076665   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.514225   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.615678   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.615860   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.672863   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423713426s)
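Editor's note: the failure at 19:06:42 and the successful re-apply just above are a CRD-establishment race, not a broken manifest. The first apply submitted the VolumeSnapshotClass object in the same batch as the CRDs that define it, so the API server had not yet registered the new resource mapping ("no matches for kind \"VolumeSnapshotClass\""). minikube treats this as retryable (retry.go:31) and re-runs the apply with --force once the CRDs exist, which is what completes here after ~2.4s. A hedged sketch of that retry pattern (attempt count and delay are illustrative, not minikube's actual backoff):

    // retry_apply.go: hedged sketch of the retry-on-failure pattern behind retry.go:31.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay between failures.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
        err := retry(5, 250*time.Millisecond, func() error {
            // The re-applied command from the log; it fails until the snapshot CRDs are established.
            out, err := exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").CombinedOutput()
            if err != nil {
                return fmt.Errorf("%v\n%s", err, out)
            }
            return nil
        })
        if err != nil {
            panic(err)
        }
    }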
	I1201 19:06:45.012696   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:45.039862   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:45.113581   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:45.113677   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:45.512159   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:45.612916   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:45.613113   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.012624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.076692   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:46.076886   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.513219   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.614389   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:46.614410   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.012983   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:47.082074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:47.082228   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.513173   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:47.539001   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:47.613834   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:47.613888   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.720458   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1201 19:06:47.720533   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:47.737916   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:47.842312   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1201 19:06:47.854853   18652 addons.go:239] Setting addon gcp-auth=true in "addons-844427"
	I1201 19:06:47.854918   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:47.855253   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:47.872349   18652 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1201 19:06:47.872398   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:47.889127   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
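Editor's note: the cli_runner.go:164 / sshutil.go:53 pairs show how the docker driver reaches the node while setting up gcp-auth: it asks Docker for the host port published for the container's 22/tcp and opens an SSH client against 127.0.0.1 on that port (32768 in this run). A hedged sketch of that port lookup, using the same Go template as the logged command:

    // ssh_port.go: hedged sketch of resolving the host port mapped to a container's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshPort("addons-844427")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh docker@127.0.0.1 on port", port)
    }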
	I1201 19:06:47.984935   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:47.986188   18652 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1201 19:06:47.987270   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1201 19:06:47.987317   18652 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1201 19:06:48.000264   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1201 19:06:48.000306   18652 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1201 19:06:48.012713   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:48.012732   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1201 19:06:48.014845   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.025338   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:48.076070   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:48.076242   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.319169   18652 addons.go:495] Verifying addon gcp-auth=true in "addons-844427"
	I1201 19:06:48.320510   18652 out.go:179] * Verifying gcp-auth addon...
	I1201 19:06:48.322314   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1201 19:06:48.324385   18652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1201 19:06:48.324404   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:48.511972   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.576680   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:48.576887   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.825438   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.012365   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:49.113076   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:49.113284   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:49.325609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.512980   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:49.539428   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:49.576955   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:49.577108   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:49.825657   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.013467   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.076398   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:50.076575   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.325380   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.511901   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.576426   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:50.576570   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.825354   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:51.013336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:51.076143   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:51.076160   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.325621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:51.512602   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:51.539807   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:51.576160   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:51.576305   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.825766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.012999   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.076674   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:52.076939   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.325348   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.511961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.576628   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:52.576783   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.825416   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.013781   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.076093   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:53.076277   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:53.325159   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.513335   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.576856   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:53.576933   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:53.825921   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:54.013814   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:54.039698   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:54.075932   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:54.076054   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.325756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:54.512555   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:54.576376   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:54.576591   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.825402   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.013470   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.076125   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:55.076379   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.324857   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.514914   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.576705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:55.576924   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.825363   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.013931   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:56.076793   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:56.076865   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.325378   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.511870   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:56.538890   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:56.576331   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:56.576521   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.825751   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.013766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.076958   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:57.076960   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:57.325733   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.512692   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.576078   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:57.576257   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:57.825807   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:58.013390   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:58.076317   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:58.076465   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.324955   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:58.512660   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:58.539734   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:58.576072   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:58.576337   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.824577   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.012157   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.075752   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:59.075864   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.325525   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.512934   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.576738   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:59.576863   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.825187   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.013118   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.076080   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.076092   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:00.325624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.512141   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.576004   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:00.576119   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.825560   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.013934   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:01.039158   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:01.076516   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:01.076627   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:01.325168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.513315   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:01.575992   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:01.576200   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:01.825605   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:02.013229   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.076135   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:02.076311   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.324861   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:02.512453   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.576502   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:02.576575   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.824825   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.014091   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:03.076434   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:03.076676   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.325197   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.512818   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:03.539072   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:03.576450   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:03.576526   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.825228   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.013756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.076370   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:04.076451   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.324726   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.512371   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.576336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:04.576501   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.824942   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.013510   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:05.075743   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:05.075910   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.325477   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.512252   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:05.539440   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:05.576711   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:05.576886   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.825476   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.013130   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.075654   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:06.075923   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:06.325270   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.512782   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.576277   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:06.576404   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:06.824840   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.013518   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.075945   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:07.076096   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.325630   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.512015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.576416   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:07.576470   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.824970   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.014742   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:08.039702   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:08.075861   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:08.076109   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.325609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.512316   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:08.576226   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:08.576238   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.825494   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.012031   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.076298   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:09.076393   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.324754   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.512221   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.576639   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:09.576846   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.825412   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:10.013578   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:10.076185   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:10.076325   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.324880   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:10.512502   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:10.539826   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:10.576382   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:10.576459   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.824990   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.013740   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.076412   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:11.076467   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:11.324879   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.512748   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.576411   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:11.576606   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:11.825199   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.013281   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:12.076487   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:12.076678   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.325030   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.512694   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:12.539991   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:12.576203   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:12.576446   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.825622   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:13.013609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.076184   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:13.076311   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.324533   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:13.512221   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.575723   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:13.575929   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.825767   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.015015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.076760   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:14.076791   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:14.325495   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.512074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.575965   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:14.576172   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:14.825661   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.013373   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:15.039352   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:15.076659   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:15.076790   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.325231   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.513091   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:15.575753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:15.575925   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.825447   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.013030   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.075751   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:16.075910   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.325567   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.512017   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.576337   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:16.576530   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.824974   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:17.013552   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:17.039734   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:17.075881   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:17.076110   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.325802   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:17.512395   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:17.575977   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:17.576079   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.825208   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.013621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.076482   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:18.076599   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:18.325047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.512639   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.576128   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:18.576327   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:18.825647   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.014173   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:19.076650   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:19.076879   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.325228   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.512015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:19.539343   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:19.576582   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:19.576730   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.825215   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:20.013677   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.076509   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:20.076564   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.324961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:20.512666   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.576351   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:20.576480   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.824877   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.014176   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:21.076753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:21.076839   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.325277   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.512755   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:21.540028   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:21.576624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:21.576775   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.825315   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.014056   18652 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 19:07:22.014121   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.042663   18652 node_ready.go:49] node "addons-844427" is "Ready"
	I1201 19:07:22.042697   18652 node_ready.go:38] duration metric: took 41.505967081s for node "addons-844427" to be "Ready" ...
	I1201 19:07:22.042712   18652 api_server.go:52] waiting for apiserver process to appear ...
	I1201 19:07:22.042773   18652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:07:22.063926   18652 api_server.go:72] duration metric: took 42.053851261s to wait for apiserver process to appear ...
	I1201 19:07:22.063960   18652 api_server.go:88] waiting for apiserver healthz status ...
	I1201 19:07:22.063980   18652 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1201 19:07:22.068917   18652 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1201 19:07:22.069861   18652 api_server.go:141] control plane version: v1.34.2
	I1201 19:07:22.069889   18652 api_server.go:131] duration metric: took 5.922111ms to wait for apiserver health ...
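The healthz probe above polls the API server until GET /healthz returns 200 with the body "ok". A minimal standalone sketch of the same check in Go, assuming the endpoint https://192.168.49.2:8443/healthz taken from the log and skipping TLS verification for brevity (minikube's own check authenticates with the cluster's credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log above; adjust for your own cluster.
    	url := "https://192.168.49.2:8443/healthz"

    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: skip certificate verification instead of loading
    		// the cluster CA and client certificates as minikube does.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }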
	I1201 19:07:22.069901   18652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 19:07:22.072978   18652 system_pods.go:59] 20 kube-system pods found
	I1201 19:07:22.073003   18652 system_pods.go:61] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending
	I1201 19:07:22.073011   18652 system_pods.go:61] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.073017   18652 system_pods.go:61] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.073025   18652 system_pods.go:61] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.073030   18652 system_pods.go:61] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending
	I1201 19:07:22.073034   18652 system_pods.go:61] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.073037   18652 system_pods.go:61] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.073041   18652 system_pods.go:61] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.073046   18652 system_pods.go:61] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.073052   18652 system_pods.go:61] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.073056   18652 system_pods.go:61] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.073062   18652 system_pods.go:61] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.073069   18652 system_pods.go:61] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.073072   18652 system_pods.go:61] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending
	I1201 19:07:22.073080   18652 system_pods.go:61] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.073085   18652 system_pods.go:61] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.073090   18652 system_pods.go:61] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.073096   18652 system_pods.go:61] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending
	I1201 19:07:22.073099   18652 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending
	I1201 19:07:22.073104   18652 system_pods.go:61] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.073117   18652 system_pods.go:74] duration metric: took 3.210649ms to wait for pod list to return data ...
	I1201 19:07:22.073124   18652 default_sa.go:34] waiting for default service account to be created ...
	I1201 19:07:22.075079   18652 default_sa.go:45] found service account: "default"
	I1201 19:07:22.075098   18652 default_sa.go:55] duration metric: took 1.967093ms for default service account to be created ...
	I1201 19:07:22.075107   18652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 19:07:22.076781   18652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 19:07:22.076799   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:22.077146   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.079384   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.079412   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending
	I1201 19:07:22.079428   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.079437   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.079451   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.079457   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending
	I1201 19:07:22.079463   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.079469   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.079476   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.079486   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.079496   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.079502   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.079510   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.079546   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.079556   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending
	I1201 19:07:22.079566   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.079589   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.079655   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.079702   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending
	I1201 19:07:22.079727   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending
	I1201 19:07:22.079743   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.079760   18652 retry.go:31] will retry after 204.891889ms: missing components: kube-dns
	I1201 19:07:22.290187   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.290229   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:07:22.290240   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.290250   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.290259   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.290267   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:07:22.290278   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.290302   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.290316   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.290322   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.290330   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.290335   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.290341   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.290349   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.290362   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:07:22.290371   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.290378   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.290395   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.290407   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.290422   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.290435   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.290453   18652 retry.go:31] will retry after 320.941489ms: missing components: kube-dns
	I1201 19:07:22.389122   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.512911   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.613758   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:22.613841   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.615655   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.615678   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:07:22.615684   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Running
	I1201 19:07:22.615690   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.615695   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.615702   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:07:22.615710   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.615718   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.615722   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.615725   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.615732   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.615739   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.615743   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.615748   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.615754   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:07:22.615762   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.615769   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.615776   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.615781   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.615786   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.615790   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Running
	I1201 19:07:22.615797   18652 system_pods.go:126] duration metric: took 540.6839ms to wait for k8s-apps to be running ...
	I1201 19:07:22.615805   18652 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 19:07:22.615844   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:07:22.630138   18652 system_svc.go:56] duration metric: took 14.310923ms WaitForService to wait for kubelet
	I1201 19:07:22.630171   18652 kubeadm.go:587] duration metric: took 42.620102274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:07:22.630191   18652 node_conditions.go:102] verifying NodePressure condition ...
	I1201 19:07:22.632688   18652 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 19:07:22.632710   18652 node_conditions.go:123] node cpu capacity is 8
	I1201 19:07:22.632725   18652 node_conditions.go:105] duration metric: took 2.529514ms to run NodePressure ...
	I1201 19:07:22.632737   18652 start.go:242] waiting for startup goroutines ...
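Most of the surrounding lines are the kapi.go poll loop cycling through label selectors (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) until each addon's pods leave Pending. A rough client-go sketch of that style of wait, assuming a local kubeconfig for this cluster and using the kubernetes.io/minikube-addons=registry selector in kube-system, where the pod list above shows those pods:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes the default kubeconfig (~/.kube/config) points at this cluster.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// Label selector and namespace taken from the log above.
    	selector := "kubernetes.io/minikube-addons=registry"
    	for {
    		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			panic(err)
    		}
    		running := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				running++
    			}
    		}
    		fmt.Printf("%d/%d pods running for %s\n", running, len(pods.Items), selector)
    		if len(pods.Items) > 0 && running == len(pods.Items) {
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }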
	I1201 19:07:22.825644   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.014028   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.076521   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:23.076596   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:23.325253   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.514507   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.577032   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:23.577241   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:23.825628   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.014836   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.076580   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:24.076605   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.325661   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.513380   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.577712   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:24.577722   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.825074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:25.014219   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.076746   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:25.076760   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.325436   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:25.514198   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.614147   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:25.614160   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.826648   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.012855   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.076303   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:26.076420   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:26.327592   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.514376   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.614951   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:26.615078   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:26.826122   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.016083   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.077102   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.077212   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:27.325936   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.512987   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.576613   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.576629   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:27.825164   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.014705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.076454   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:28.076491   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.325321   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.513492   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.577396   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.577651   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:28.825381   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.014800   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.076816   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:29.077497   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.325226   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.512725   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.576822   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.576944   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:29.826549   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:30.015353   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:30.077414   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:30.077475   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.325425   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:30.696498   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:30.696572   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.696605   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:30.825894   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.014363   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.077395   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:31.077395   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.325716   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.513535   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.577668   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:31.577681   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.825175   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.015498   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.077316   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:32.077418   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.326063   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.513496   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.615137   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:32.615421   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.825986   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.015684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.076431   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:33.076469   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:33.325253   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.513643   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.576880   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:33.577025   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:33.826723   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:34.014684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.077035   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:34.077158   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:34.325885   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:34.513335   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.600143   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:34.600161   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:34.826517   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.014681   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:35.076561   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:35.076602   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:35.325428   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.514653   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:35.577207   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:35.577459   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:35.825674   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.015894   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:36.077356   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:36.077495   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:36.325191   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.513756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:36.577842   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:36.577872   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:36.825993   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.015275   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:37.077224   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:37.077385   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:37.325003   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.529997   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:37.576975   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:37.577045   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:37.826155   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.015542   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:38.077134   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:38.077165   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:38.325705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.513042   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:38.613772   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:38.613983   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:38.825536   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:39.013889   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:39.113806   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:39.114024   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:39.326349   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:39.513109   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:39.576853   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:39.577199   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:39.826837   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:40.014348   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:40.076707   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:40.076870   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:40.325432   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:40.513875   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:40.576413   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:40.577645   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:40.825073   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:41.015093   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:41.077138   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:41.077272   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:41.325902   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:41.512724   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:41.577444   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:41.577451   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:41.825573   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:42.013556   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:42.076060   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:42.076237   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:42.325684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:42.512766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:42.576218   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:42.576157   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:42.826090   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:43.017504   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:43.076790   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:43.077416   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:43.325867   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:43.596542   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:43.596856   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:43.596953   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:43.825865   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:44.015901   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:44.076508   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:44.076638   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:44.325351   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:44.512473   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:44.577002   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:44.577047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:44.826200   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:45.014179   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:45.076602   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:45.076623   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:45.325596   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:45.512863   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:45.576768   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:45.576785   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:45.825938   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:46.015778   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:46.115976   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:46.116159   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:46.325528   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:46.512379   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:46.618477   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:46.619131   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:46.827336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:47.017047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:47.077126   18652 kapi.go:107] duration metric: took 1m5.503746372s to wait for kubernetes.io/minikube-addons=registry ...
	I1201 19:07:47.077213   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:47.325992   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:47.515153   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:47.577281   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:47.826679   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:48.014756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:48.076516   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:48.326142   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:48.513802   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:48.614376   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:48.825961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:49.015011   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:49.077341   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:49.325168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:49.513073   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:49.576654   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:49.825466   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:50.014103   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:50.077045   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:50.325879   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:50.513203   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:50.614658   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:50.828549   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:51.015630   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:51.076545   18652 kapi.go:107] duration metric: took 1m9.503145797s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1201 19:07:51.326753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:51.513068   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:51.826168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:52.013694   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:52.325533   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:52.512486   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:52.826922   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:53.016479   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:53.326110   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:53.512982   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:53.825770   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:54.013803   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:54.326390   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:54.516172   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:54.825634   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:55.013788   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:55.325382   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:55.513150   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:55.826586   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:56.014515   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:56.325345   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:56.512456   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:56.825419   18652 kapi.go:107] duration metric: took 1m8.503101444s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1201 19:07:56.827680   18652 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-844427 cluster.
	I1201 19:07:56.829168   18652 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1201 19:07:56.830725   18652 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1201 19:07:57.013083   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:57.581441   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:58.014672   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:58.514034   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:59.014615   18652 kapi.go:107] duration metric: took 1m17.005211485s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1201 19:07:59.016457   18652 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, registry-creds, storage-provisioner, metrics-server, default-storageclass, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1201 19:07:59.017822   18652 addons.go:530] duration metric: took 1m19.007710614s for enable addons: enabled=[nvidia-device-plugin ingress-dns amd-gpu-device-plugin inspektor-gadget cloud-spanner registry-creds storage-provisioner metrics-server default-storageclass yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1201 19:07:59.017869   18652 start.go:247] waiting for cluster config update ...
	I1201 19:07:59.017892   18652 start.go:256] writing updated cluster config ...
	I1201 19:07:59.018200   18652 ssh_runner.go:195] Run: rm -f paused
	I1201 19:07:59.022112   18652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:07:59.024741   18652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kt5tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.028530   18652 pod_ready.go:94] pod "coredns-66bc5c9577-kt5tx" is "Ready"
	I1201 19:07:59.028552   18652 pod_ready.go:86] duration metric: took 3.79006ms for pod "coredns-66bc5c9577-kt5tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.030078   18652 pod_ready.go:83] waiting for pod "etcd-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.033025   18652 pod_ready.go:94] pod "etcd-addons-844427" is "Ready"
	I1201 19:07:59.033041   18652 pod_ready.go:86] duration metric: took 2.944917ms for pod "etcd-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.035096   18652 pod_ready.go:83] waiting for pod "kube-apiserver-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.038936   18652 pod_ready.go:94] pod "kube-apiserver-addons-844427" is "Ready"
	I1201 19:07:59.038952   18652 pod_ready.go:86] duration metric: took 3.824909ms for pod "kube-apiserver-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.040541   18652 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.426072   18652 pod_ready.go:94] pod "kube-controller-manager-addons-844427" is "Ready"
	I1201 19:07:59.426100   18652 pod_ready.go:86] duration metric: took 385.541798ms for pod "kube-controller-manager-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.626191   18652 pod_ready.go:83] waiting for pod "kube-proxy-7w28c" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.025673   18652 pod_ready.go:94] pod "kube-proxy-7w28c" is "Ready"
	I1201 19:08:00.025699   18652 pod_ready.go:86] duration metric: took 399.482252ms for pod "kube-proxy-7w28c" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.225474   18652 pod_ready.go:83] waiting for pod "kube-scheduler-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.625975   18652 pod_ready.go:94] pod "kube-scheduler-addons-844427" is "Ready"
	I1201 19:08:00.626004   18652 pod_ready.go:86] duration metric: took 400.505594ms for pod "kube-scheduler-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.626021   18652 pod_ready.go:40] duration metric: took 1.603881774s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:08:00.669462   18652 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 19:08:00.671275   18652 out.go:179] * Done! kubectl is now configured to use "addons-844427" cluster and "default" namespace by default
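	The gcp-auth messages in the log above describe the addon's opt-out mechanism: once enabled, the webhook mounts GCP credentials into every new pod unless the pod carries the gcp-auth-skip-secret label, and pods that already exist only pick the mount up after being recreated (or after rerunning "addons enable" with --refresh). As a minimal illustrative sketch only (the pod and file names are hypothetical; the image is one already pulled by this run), a pod that opts out could look like:
        # skip-gcp-auth-pod.yaml -- hypothetical example; apply with: kubectl apply -f skip-gcp-auth-pod.yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: no-gcp-creds                        # hypothetical pod name
          labels:
            gcp-auth-skip-secret: "true"            # tells the gcp-auth webhook to skip mounting credentials
        spec:
          containers:
          - name: app
            image: docker.io/kicbase/echo-server:1.0   # placeholder; image already used elsewhere in this run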
	
	
	==> CRI-O <==
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.034315816Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-wpp29/POD" id=7f8d9702-eaf2-4083-b809-729146c0e798 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.034400642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.041403832Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wpp29 Namespace:default ID:aa11e9fe5ebe2df4023edc0efefed55b55d3f6dcdc77221adc8ffe769f1f23a3 UID:5869bfc5-dffb-483c-8b09-062d4baccad4 NetNS:/var/run/netns/3233196a-6a90-4a3e-85c9-570ff86ebe92 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad28}] Aliases:map[]}"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.041432832Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-wpp29 to CNI network \"kindnet\" (type=ptp)"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.051862689Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wpp29 Namespace:default ID:aa11e9fe5ebe2df4023edc0efefed55b55d3f6dcdc77221adc8ffe769f1f23a3 UID:5869bfc5-dffb-483c-8b09-062d4baccad4 NetNS:/var/run/netns/3233196a-6a90-4a3e-85c9-570ff86ebe92 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad28}] Aliases:map[]}"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.052005732Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-wpp29 for CNI network kindnet (type=ptp)"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.052960197Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.05415375Z" level=info msg="Ran pod sandbox aa11e9fe5ebe2df4023edc0efefed55b55d3f6dcdc77221adc8ffe769f1f23a3 with infra container: default/hello-world-app-5d498dc89-wpp29/POD" id=7f8d9702-eaf2-4083-b809-729146c0e798 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.05550017Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5b934925-9abd-4f28-b859-937d1fdb6edc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.055625846Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5b934925-9abd-4f28-b859-937d1fdb6edc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.0556803Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=5b934925-9abd-4f28-b859-937d1fdb6edc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.056306619Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d441c5b2-2f15-4474-8ca3-d85065e08fac name=/runtime.v1.ImageService/PullImage
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.06097796Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.446156123Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=d441c5b2-2f15-4474-8ca3-d85065e08fac name=/runtime.v1.ImageService/PullImage
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.446692893Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d4a0ac5d-5374-4152-832e-c4f394a280b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.448075757Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8e660ebf-1825-49cc-abfc-e1592ab58daf name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.451708608Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-wpp29/hello-world-app" id=7ce59482-a734-40e9-94af-d5bfddc56d8d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.45181653Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.457943761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.458210504Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1378c7076b3b2d7d80dbc1927c7672f2a561f321513c899ef183d5b0fa9e4d32/merged/etc/passwd: no such file or directory"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.458383458Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1378c7076b3b2d7d80dbc1927c7672f2a561f321513c899ef183d5b0fa9e4d32/merged/etc/group: no such file or directory"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.458647288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.491763448Z" level=info msg="Created container ce618e8109b38e947cecbdb42a9983bcca08149567005ee3e681131adffe24c9: default/hello-world-app-5d498dc89-wpp29/hello-world-app" id=7ce59482-a734-40e9-94af-d5bfddc56d8d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.49243659Z" level=info msg="Starting container: ce618e8109b38e947cecbdb42a9983bcca08149567005ee3e681131adffe24c9" id=73f95438-172b-4baf-9ce9-15c761d85446 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 19:10:42 addons-844427 crio[770]: time="2025-12-01T19:10:42.494356145Z" level=info msg="Started container" PID=9567 containerID=ce618e8109b38e947cecbdb42a9983bcca08149567005ee3e681131adffe24c9 description=default/hello-world-app-5d498dc89-wpp29/hello-world-app id=73f95438-172b-4baf-9ce9-15c761d85446 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa11e9fe5ebe2df4023edc0efefed55b55d3f6dcdc77221adc8ffe769f1f23a3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ce618e8109b38       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   aa11e9fe5ebe2       hello-world-app-5d498dc89-wpp29            default
	61f20aea73577       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   b537f44f74b47       registry-creds-764b6fb674-sqhck            kube-system
	649b66037e430       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   efcf265bab6ca       nginx                                      default
	fb90e673f579f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   ec4bca8d0366b       busybox                                    default
	ea87d05f6e32f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	ce685fdd387b8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   ed8504c79dc5f       gcp-auth-78565c9fb4-67cg8                  gcp-auth
	9a5fa01966568       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	9079f8e7ee755       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	9ad4ad6057500       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	86bd88fd13749       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   69ebf86d7c6c0       gadget-vbz2n                               gadget
	ca14100103757       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	a70e8f8cd00b4       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   4fbbb364a4e0f       ingress-nginx-controller-6c8bf45fb-4rcgb   ingress-nginx
	f6fc7935fddb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   05b9091cf35ad       registry-proxy-q7742                       kube-system
	d3bb04d9d3c1d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   84df82f524e05       nvidia-device-plugin-daemonset-v667z       kube-system
	9f5f39915b7c1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   7d256bc8602ab       amd-gpu-device-plugin-wbc9c                kube-system
	7c8ad6d89b920       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   0a5b664c09097       csi-hostpath-resizer-0                     kube-system
	fff64f001bd5f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	eb1180791d4aa       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   b1e493ea11dc8       snapshot-controller-7d9fbc56b8-977vf       kube-system
	1a8c85353220f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              patch                                    0                   b719f0f081972       ingress-nginx-admission-patch-znqvm        ingress-nginx
	016dfc96303af       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   54a81db21ebc5       snapshot-controller-7d9fbc56b8-9ghp7       kube-system
	b8934753229d8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   2630379b2779e       yakd-dashboard-5ff678cb9-dmddq             yakd-dashboard
	f0e51975105af       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   004e4d36f4dac       ingress-nginx-admission-create-7mpw6       ingress-nginx
	f0949ee283560       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   008b484423ddd       csi-hostpath-attacher-0                    kube-system
	9c0d148e12238       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   557a9a8fcdc81       local-path-provisioner-648f6765c9-qzbbn    local-path-storage
	38134e01f2871       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   d41da208176d3       metrics-server-85b7d694d7-xs4wl            kube-system
	1b74364792d43       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   4a7f73b525029       kube-ingress-dns-minikube                  kube-system
	1a5f66e8aa183       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   8e4c553f5fff3       registry-6b586f9694-g722r                  kube-system
	33dd26e97b2ff       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   03bb054ab5de4       cloud-spanner-emulator-5bdddb765-wxltm     default
	840acaec38326       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   b5f9c56975193       storage-provisioner                        kube-system
	d2bdc76e2c839       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   a9d70c3b19c3b       coredns-66bc5c9577-kt5tx                   kube-system
	260635ba17a06       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   dd9f627436c9a       kube-proxy-7w28c                           kube-system
	83e6fdffcf712       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   e9eca24863587       kindnet-p8gkr                              kube-system
	3db6a1c2f5cc4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   747804815b279       kube-scheduler-addons-844427               kube-system
	08674a3640b68       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   9e50ce607e3ef       kube-apiserver-addons-844427               kube-system
	58571469b8e13       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   ae333106d829b       etcd-addons-844427                         kube-system
	e6177f5ff208e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   122ff4d46e5c8       kube-controller-manager-addons-844427      kube-system
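	The listing above is the CRI-level view of every container on the node, including the Exited admission-webhook jobs. For readers who want to reproduce it against this profile, a minimal sketch (assumes the crio runtime used by this job; the profile name comes from this report):
        minikube -p addons-844427 ssh     # open a shell inside the node
        sudo crictl ps -a                 # all containers, including exited ones
        sudo crictl pods                  # the pod sandboxes backing them
        sudo crictl images                # images pulled by CRI-O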
	
	
	==> coredns [d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced] <==
	[INFO] 10.244.0.22:54272 - 41620 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129495s
	[INFO] 10.244.0.22:48560 - 15298 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005282148s
	[INFO] 10.244.0.22:34168 - 55997 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00732326s
	[INFO] 10.244.0.22:43134 - 10486 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005522831s
	[INFO] 10.244.0.22:43871 - 62542 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007501145s
	[INFO] 10.244.0.22:33089 - 17800 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007585467s
	[INFO] 10.244.0.22:43899 - 65332 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007696369s
	[INFO] 10.244.0.22:59405 - 51136 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000836097s
	[INFO] 10.244.0.22:44400 - 39171 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001196545s
	[INFO] 10.244.0.28:45077 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000218336s
	[INFO] 10.244.0.28:45751 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125333s
	[INFO] 10.244.0.31:35910 - 10338 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000264998s
	[INFO] 10.244.0.31:58381 - 16603 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000378225s
	[INFO] 10.244.0.31:48912 - 40235 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000103248s
	[INFO] 10.244.0.31:58064 - 22144 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000173185s
	[INFO] 10.244.0.31:41512 - 11252 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000089533s
	[INFO] 10.244.0.31:38923 - 2463 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000148833s
	[INFO] 10.244.0.31:38711 - 4052 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00661215s
	[INFO] 10.244.0.31:54899 - 52138 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006712231s
	[INFO] 10.244.0.31:51197 - 26885 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006083332s
	[INFO] 10.244.0.31:36364 - 3714 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.007389672s
	[INFO] 10.244.0.31:38446 - 58631 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.007125338s
	[INFO] 10.244.0.31:37262 - 25066 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.007299299s
	[INFO] 10.244.0.31:45085 - 49403 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001872579s
	[INFO] 10.244.0.31:34082 - 42487 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002562538s
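	The long run of NXDOMAIN answers above is ordinary Kubernetes search-path expansion rather than an error: with the default ndots:5, an external name such as storage.googleapis.com is first tried against every cluster and GCE search suffix (cluster.local, c.k8s-minikube.internal, google.internal, all visible above) before the absolute name finally returns NOERROR. As a hedged sketch of how a workload could shorten that walk (hypothetical pod, not part of this test), ndots can be lowered per pod via dnsConfig:
        # dns-tuned-pod.yaml -- hypothetical example; apply with: kubectl apply -f dns-tuned-pod.yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: dns-tuned                           # hypothetical example pod
        spec:
          dnsConfig:
            options:
            - name: ndots
              value: "2"                            # external names are tried as absolute sooner
          containers:
          - name: app
            image: docker.io/kicbase/echo-server:1.0   # image already pulled earlier in this run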
	
	
	==> describe nodes <==
	Name:               addons-844427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-844427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=addons-844427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T19_06_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-844427
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-844427"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 19:06:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-844427
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 19:10:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 19:10:08 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 19:10:08 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 19:10:08 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 19:10:08 +0000   Mon, 01 Dec 2025 19:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-844427
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                0cb0da7c-3b5c-4a34-a77d-6a324b2594f4
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-5bdddb765-wxltm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     hello-world-app-5d498dc89-wpp29             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-vbz2n                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  gcp-auth                    gcp-auth-78565c9fb4-67cg8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-4rcgb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m2s
	  kube-system                 amd-gpu-device-plugin-wbc9c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-66bc5c9577-kt5tx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpathplugin-84njl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 etcd-addons-844427                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-p8gkr                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m4s
	  kube-system                 kube-apiserver-addons-844427                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-844427       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-7w28c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-addons-844427                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-xs4wl             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m2s
	  kube-system                 nvidia-device-plugin-daemonset-v667z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 registry-6b586f9694-g722r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-creds-764b6fb674-sqhck             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-proxy-q7742                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-977vf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-9ghp7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  local-path-storage          local-path-provisioner-648f6765c9-qzbbn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-dmddq              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m14s)  kubelet          Node addons-844427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m14s)  kubelet          Node addons-844427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x8 over 4m14s)  kubelet          Node addons-844427 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet          Node addons-844427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet          Node addons-844427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet          Node addons-844427 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s                   node-controller  Node addons-844427 event: Registered Node addons-844427 in Controller
	  Normal  NodeReady                3m22s                  kubelet          Node addons-844427 status is now: NodeReady
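	The "Allocated resources" block above is the sum of the per-pod requests and limits in the table, compared against the node's Allocatable values, and the whole view can be reproduced by describing the node directly. A short sketch (only the node name comes from this report; the resources stanza is a hypothetical illustration of what feeds those sums):
        kubectl describe node addons-844427   # reproduces the view above
        # each container contributes via its resources stanza, e.g.:
        #   resources:
        #     requests:
        #       cpu: 100m
        #       memory: 90Mi
        #     limits:
        #       memory: 170Mi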
	
	
	==> dmesg <==
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.060605] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023816] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +4.031647] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +8.063094] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[Dec 1 19:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[ +32.252518] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	
	
	==> etcd [58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f] <==
	{"level":"warn","ts":"2025-12-01T19:06:31.705482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.712067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.721164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.732218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.739749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.758512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.762413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.770327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.776752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.831160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:42.430156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:42.436693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.224642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.231254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.248085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.254466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:30.694577Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.473028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-01T19:07:30.694575Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.368422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:30.694674Z","caller":"traceutil/trace.go:172","msg":"trace[1707364333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"182.589675ms","start":"2025-12-01T19:07:30.512069Z","end":"2025-12-01T19:07:30.694659Z","steps":["trace[1707364333] 'range keys from in-memory index tree'  (duration: 182.39346ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:30.694687Z","caller":"traceutil/trace.go:172","msg":"trace[646499937] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"119.50243ms","start":"2025-12-01T19:07:30.575174Z","end":"2025-12-01T19:07:30.694677Z","steps":["trace[646499937] 'range keys from in-memory index tree'  (duration: 119.286722ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:07:30.694583Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.398225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:30.694722Z","caller":"traceutil/trace.go:172","msg":"trace[1487492697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"119.538301ms","start":"2025-12-01T19:07:30.575174Z","end":"2025-12-01T19:07:30.694712Z","steps":["trace[1487492697] 'range keys from in-memory index tree'  (duration: 119.324242ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:49.747098Z","caller":"traceutil/trace.go:172","msg":"trace[669268451] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"108.048788ms","start":"2025-12-01T19:07:49.639034Z","end":"2025-12-01T19:07:49.747083Z","steps":["trace[669268451] 'process raft request'  (duration: 106.548585ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:08:28.805369Z","caller":"traceutil/trace.go:172","msg":"trace[530951056] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"121.448094ms","start":"2025-12-01T19:08:28.683900Z","end":"2025-12-01T19:08:28.805348Z","steps":["trace[530951056] 'process raft request'  (duration: 121.315974ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:08:28.867962Z","caller":"traceutil/trace.go:172","msg":"trace[2032891266] transaction","detail":"{read_only:false; response_revision:1399; number_of_response:1; }","duration":"161.228774ms","start":"2025-12-01T19:08:28.706717Z","end":"2025-12-01T19:08:28.867945Z","steps":["trace[2032891266] 'process raft request'  (duration: 161.151156ms)"],"step_count":1}
	
	
	==> gcp-auth [ce685fdd387b8219a747fbbc8f9350a09da3aa15276223c93502f01f6831a292] <==
	2025/12/01 19:07:56 GCP Auth Webhook started!
	2025/12/01 19:08:00 Ready to marshal response ...
	2025/12/01 19:08:00 Ready to write response ...
	2025/12/01 19:08:01 Ready to marshal response ...
	2025/12/01 19:08:01 Ready to write response ...
	2025/12/01 19:08:01 Ready to marshal response ...
	2025/12/01 19:08:01 Ready to write response ...
	2025/12/01 19:08:13 Ready to marshal response ...
	2025/12/01 19:08:13 Ready to write response ...
	2025/12/01 19:08:13 Ready to marshal response ...
	2025/12/01 19:08:13 Ready to write response ...
	2025/12/01 19:08:21 Ready to marshal response ...
	2025/12/01 19:08:21 Ready to write response ...
	2025/12/01 19:08:21 Ready to marshal response ...
	2025/12/01 19:08:21 Ready to write response ...
	2025/12/01 19:08:21 Ready to marshal response ...
	2025/12/01 19:08:21 Ready to write response ...
	2025/12/01 19:08:34 Ready to marshal response ...
	2025/12/01 19:08:34 Ready to write response ...
	2025/12/01 19:09:06 Ready to marshal response ...
	2025/12/01 19:09:06 Ready to write response ...
	2025/12/01 19:10:41 Ready to marshal response ...
	2025/12/01 19:10:41 Ready to write response ...
	
	
	==> kernel <==
	 19:10:43 up 53 min,  0 user,  load average: 0.20, 0.64, 0.34
	Linux addons-844427 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6] <==
	I1201 19:08:41.467162       1 main.go:301] handling current node
	I1201 19:08:51.467205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:08:51.467251       1 main.go:301] handling current node
	I1201 19:09:01.469472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:01.469505       1 main.go:301] handling current node
	I1201 19:09:11.466871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:11.466903       1 main.go:301] handling current node
	I1201 19:09:21.467503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:21.467532       1 main.go:301] handling current node
	I1201 19:09:31.465605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:31.465655       1 main.go:301] handling current node
	I1201 19:09:41.465700       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:41.465744       1 main.go:301] handling current node
	I1201 19:09:51.469392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:09:51.469420       1 main.go:301] handling current node
	I1201 19:10:01.474485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:10:01.474515       1 main.go:301] handling current node
	I1201 19:10:11.465334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:10:11.465393       1 main.go:301] handling current node
	I1201 19:10:21.465856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:10:21.465884       1 main.go:301] handling current node
	I1201 19:10:31.469982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:10:31.470022       1 main.go:301] handling current node
	I1201 19:10:41.465716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:10:41.465749       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1201 19:07:34.611928       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.23:443: connect: connection refused" logger="UnhandledError"
	W1201 19:07:35.613572       1 handler_proxy.go:99] no RequestInfo found in the context
	W1201 19:07:35.613601       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 19:07:35.613639       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1201 19:07:35.613655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1201 19:07:35.613656       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1201 19:07:35.614798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1201 19:07:39.623365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W1201 19:07:39.623378       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 19:07:39.623459       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1201 19:07:39.633252       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1201 19:08:10.384367       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33602: use of closed network connection
	E1201 19:08:10.526877       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33626: use of closed network connection
	I1201 19:08:21.646132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1201 19:08:21.826703       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.85.183"}
	I1201 19:08:45.665686       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1201 19:10:41.800734       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.143.17"}
	
	
	==> kube-controller-manager [e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6] <==
	I1201 19:06:39.211575       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 19:06:39.212314       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 19:06:39.212454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 19:06:39.212477       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 19:06:39.214900       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 19:06:39.214922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:06:39.214935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:06:39.214972       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 19:06:39.215038       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 19:06:39.215095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 19:06:39.215105       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 19:06:39.215112       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 19:06:39.220939       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-844427" podCIDRs=["10.244.0.0/24"]
	I1201 19:06:39.220963       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 19:06:39.233162       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1201 19:07:09.218963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 19:07:09.219101       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1201 19:07:09.219137       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1201 19:07:09.239654       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1201 19:07:09.242961       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1201 19:07:09.319797       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:07:09.343303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 19:07:24.214577       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1201 19:07:39.324905       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 19:07:39.350414       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5] <==
	I1201 19:06:41.226219       1 server_linux.go:53] "Using iptables proxy"
	I1201 19:06:41.347808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 19:06:41.448188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 19:06:41.448231       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:06:41.448344       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:06:41.479863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:06:41.479906       1 server_linux.go:132] "Using iptables Proxier"
	I1201 19:06:41.485752       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:06:41.491114       1 server.go:527] "Version info" version="v1.34.2"
	I1201 19:06:41.491373       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:06:41.493614       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:06:41.493648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:06:41.493748       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:06:41.493781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:06:41.493816       1 config.go:200] "Starting service config controller"
	I1201 19:06:41.493827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:06:41.494019       1 config.go:309] "Starting node config controller"
	I1201 19:06:41.494026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:06:41.594242       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:06:41.594272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 19:06:41.594317       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 19:06:41.594325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750] <==
	E1201 19:06:32.229517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 19:06:32.229750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:32.229850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:06:32.229897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 19:06:32.229904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 19:06:32.229899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:32.229959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 19:06:32.230084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 19:06:32.230107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:06:32.230174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 19:06:32.229891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:32.230176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:06:32.230191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 19:06:32.230220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 19:06:32.230281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 19:06:32.230378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 19:06:32.230379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:33.130753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:33.170903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:33.189107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 19:06:33.192118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:06:33.339162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:33.379187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:33.445795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1201 19:06:33.726762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 19:09:09 addons-844427 kubelet[1291]: I1201 19:09:09.362414    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q7742" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.707642    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-gcp-creds\") pod \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\" (UID: \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\") "
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.707718    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm68k\" (UniqueName: \"kubernetes.io/projected/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-kube-api-access-cm68k\") pod \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\" (UID: \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\") "
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.707794    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "13bbef40-4fb6-49a4-8bc2-3dfc41166c87" (UID: "13bbef40-4fb6-49a4-8bc2-3dfc41166c87"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.707836    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^34d33296-cee9-11f0-b1f7-3a203d4ca2bd\") pod \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\" (UID: \"13bbef40-4fb6-49a4-8bc2-3dfc41166c87\") "
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.708001    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-gcp-creds\") on node \"addons-844427\" DevicePath \"\""
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.710356    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-kube-api-access-cm68k" (OuterVolumeSpecName: "kube-api-access-cm68k") pod "13bbef40-4fb6-49a4-8bc2-3dfc41166c87" (UID: "13bbef40-4fb6-49a4-8bc2-3dfc41166c87"). InnerVolumeSpecName "kube-api-access-cm68k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.711249    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^34d33296-cee9-11f0-b1f7-3a203d4ca2bd" (OuterVolumeSpecName: "task-pv-storage") pod "13bbef40-4fb6-49a4-8bc2-3dfc41166c87" (UID: "13bbef40-4fb6-49a4-8bc2-3dfc41166c87"). InnerVolumeSpecName "pvc-a5be050a-229f-449e-8b94-4228f43a0e2f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.809114    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cm68k\" (UniqueName: \"kubernetes.io/projected/13bbef40-4fb6-49a4-8bc2-3dfc41166c87-kube-api-access-cm68k\") on node \"addons-844427\" DevicePath \"\""
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.809179    1291 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-a5be050a-229f-449e-8b94-4228f43a0e2f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^34d33296-cee9-11f0-b1f7-3a203d4ca2bd\") on node \"addons-844427\" "
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.813361    1291 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-a5be050a-229f-449e-8b94-4228f43a0e2f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^34d33296-cee9-11f0-b1f7-3a203d4ca2bd") on node "addons-844427"
	Dec 01 19:09:13 addons-844427 kubelet[1291]: I1201 19:09:13.910462    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-a5be050a-229f-449e-8b94-4228f43a0e2f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^34d33296-cee9-11f0-b1f7-3a203d4ca2bd\") on node \"addons-844427\" DevicePath \"\""
	Dec 01 19:09:14 addons-844427 kubelet[1291]: I1201 19:09:14.014140    1291 scope.go:117] "RemoveContainer" containerID="304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb"
	Dec 01 19:09:14 addons-844427 kubelet[1291]: I1201 19:09:14.023777    1291 scope.go:117] "RemoveContainer" containerID="304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb"
	Dec 01 19:09:14 addons-844427 kubelet[1291]: E1201 19:09:14.024222    1291 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb\": container with ID starting with 304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb not found: ID does not exist" containerID="304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb"
	Dec 01 19:09:14 addons-844427 kubelet[1291]: I1201 19:09:14.024262    1291 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb"} err="failed to get container status \"304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb\": rpc error: code = NotFound desc = could not find container \"304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb\": container with ID starting with 304b03139c723771d7959e152cacef59a211d224d934bbe106b628edde5137fb not found: ID does not exist"
	Dec 01 19:09:14 addons-844427 kubelet[1291]: I1201 19:09:14.364949    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bbef40-4fb6-49a4-8bc2-3dfc41166c87" path="/var/lib/kubelet/pods/13bbef40-4fb6-49a4-8bc2-3dfc41166c87/volumes"
	Dec 01 19:09:24 addons-844427 kubelet[1291]: E1201 19:09:24.920231    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-sqhck" podUID="f6be056a-d2f0-4bd2-a225-0755fd0d6439"
	Dec 01 19:09:40 addons-844427 kubelet[1291]: I1201 19:09:40.122662    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-sqhck" podStartSLOduration=179.463641394 podStartE2EDuration="3m0.12264042s" podCreationTimestamp="2025-12-01 19:06:40 +0000 UTC" firstStartedPulling="2025-12-01 19:09:38.383787897 +0000 UTC m=+184.102596760" lastFinishedPulling="2025-12-01 19:09:39.042786916 +0000 UTC m=+184.761595786" observedRunningTime="2025-12-01 19:09:40.122135092 +0000 UTC m=+185.840943986" watchObservedRunningTime="2025-12-01 19:09:40.12264042 +0000 UTC m=+185.841449306"
	Dec 01 19:10:02 addons-844427 kubelet[1291]: I1201 19:10:02.362276    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wbc9c" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:10:13 addons-844427 kubelet[1291]: I1201 19:10:13.362783    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-v667z" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:10:35 addons-844427 kubelet[1291]: I1201 19:10:35.362014    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q7742" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:10:41 addons-844427 kubelet[1291]: I1201 19:10:41.826645    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5869bfc5-dffb-483c-8b09-062d4baccad4-gcp-creds\") pod \"hello-world-app-5d498dc89-wpp29\" (UID: \"5869bfc5-dffb-483c-8b09-062d4baccad4\") " pod="default/hello-world-app-5d498dc89-wpp29"
	Dec 01 19:10:41 addons-844427 kubelet[1291]: I1201 19:10:41.826780    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbghb\" (UniqueName: \"kubernetes.io/projected/5869bfc5-dffb-483c-8b09-062d4baccad4-kube-api-access-vbghb\") pod \"hello-world-app-5d498dc89-wpp29\" (UID: \"5869bfc5-dffb-483c-8b09-062d4baccad4\") " pod="default/hello-world-app-5d498dc89-wpp29"
	Dec 01 19:10:43 addons-844427 kubelet[1291]: I1201 19:10:43.355931    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-wpp29" podStartSLOduration=1.9643644249999999 podStartE2EDuration="2.355909341s" podCreationTimestamp="2025-12-01 19:10:41 +0000 UTC" firstStartedPulling="2025-12-01 19:10:42.055949402 +0000 UTC m=+247.774758280" lastFinishedPulling="2025-12-01 19:10:42.447494331 +0000 UTC m=+248.166303196" observedRunningTime="2025-12-01 19:10:43.355458378 +0000 UTC m=+249.074267261" watchObservedRunningTime="2025-12-01 19:10:43.355909341 +0000 UTC m=+249.074718225"
	
	
	==> storage-provisioner [840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30] <==
	W1201 19:10:19.188363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:21.191534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:21.215496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:23.218825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:23.222279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:25.224636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:25.228207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:27.231002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:27.235543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:29.238641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:29.242386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:31.245813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:31.249696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:33.252670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:33.256392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:35.259011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:35.262623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:37.266106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:37.270274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:39.273401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:39.277026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:41.279616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:41.283167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:43.286145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:43.291061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-844427 -n addons-844427
helpers_test.go:269: (dbg) Run:  kubectl --context addons-844427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm: exit status 1 (55.720472ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7mpw6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-znqvm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm: exit status 1
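The NotFound errors in the describe step are likely a namespace mismatch rather than missing evidence: the pod list at helpers_test.go:269 queries all namespaces (-A), but the follow-up describe at helpers_test.go:285 names the pods without a namespace, so kubectl looks in "default" and the admission pods (which the ingress addon creates in its own namespace, assumed here to be ingress-nginx) come back as not found. A hand-run sketch of the same post-mortem, with the namespace made explicit, would be:

	kubectl --context addons-844427 -n ingress-nginx describe pod \
	  ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm
	# same query the harness ran, keeping the namespace column visible:
	kubectl --context addons-844427 get po -A --field-selector=status.phase!=Running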
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (246.3567ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:10:44.231783   33042 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:10:44.231929   33042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:10:44.231938   33042 out.go:374] Setting ErrFile to fd 2...
	I1201 19:10:44.231942   33042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:10:44.232112   33042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:10:44.232395   33042 mustload.go:66] Loading cluster: addons-844427
	I1201 19:10:44.232761   33042 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:10:44.232806   33042 addons.go:622] checking whether the cluster is paused
	I1201 19:10:44.232896   33042 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:10:44.232911   33042 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:10:44.233270   33042 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:10:44.252409   33042 ssh_runner.go:195] Run: systemctl --version
	I1201 19:10:44.252469   33042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:10:44.270796   33042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:10:44.369204   33042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:10:44.369283   33042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:10:44.398205   33042 cri.go:89] found id: "61f20aea735772c8f41a604a1cd85f5fe06be2d825113acdc6b4d3aac8c05336"
	I1201 19:10:44.398232   33042 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:10:44.398236   33042 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:10:44.398239   33042 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:10:44.398242   33042 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:10:44.398249   33042 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:10:44.398253   33042 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:10:44.398255   33042 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:10:44.398258   33042 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:10:44.398267   33042 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:10:44.398270   33042 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:10:44.398272   33042 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:10:44.398275   33042 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:10:44.398278   33042 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:10:44.398281   33042 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:10:44.398309   33042 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:10:44.398319   33042 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:10:44.398326   33042 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:10:44.398330   33042 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:10:44.398332   33042 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:10:44.398338   33042 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:10:44.398345   33042 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:10:44.398348   33042 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:10:44.398353   33042 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:10:44.398356   33042 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:10:44.398359   33042 cri.go:89] found id: ""
	I1201 19:10:44.398407   33042 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:10:44.412364   33042 out.go:203] 
	W1201 19:10:44.413404   33042 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:10:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:10:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:10:44.413427   33042 out.go:285] * 
	* 
	W1201 19:10:44.417160   33042 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:10:44.418536   33042 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable ingress --alsologtostderr -v=1: exit status 11 (238.300245ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:10:44.478142   33105 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:10:44.478444   33105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:10:44.478453   33105 out.go:374] Setting ErrFile to fd 2...
	I1201 19:10:44.478458   33105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:10:44.478655   33105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:10:44.478888   33105 mustload.go:66] Loading cluster: addons-844427
	I1201 19:10:44.479217   33105 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:10:44.479235   33105 addons.go:622] checking whether the cluster is paused
	I1201 19:10:44.479350   33105 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:10:44.479370   33105 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:10:44.479717   33105 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:10:44.496912   33105 ssh_runner.go:195] Run: systemctl --version
	I1201 19:10:44.496954   33105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:10:44.513740   33105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:10:44.611086   33105 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:10:44.611182   33105 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:10:44.639417   33105 cri.go:89] found id: "61f20aea735772c8f41a604a1cd85f5fe06be2d825113acdc6b4d3aac8c05336"
	I1201 19:10:44.639440   33105 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:10:44.639444   33105 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:10:44.639447   33105 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:10:44.639456   33105 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:10:44.639460   33105 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:10:44.639463   33105 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:10:44.639465   33105 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:10:44.639468   33105 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:10:44.639473   33105 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:10:44.639476   33105 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:10:44.639479   33105 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:10:44.639482   33105 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:10:44.639485   33105 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:10:44.639487   33105 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:10:44.639492   33105 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:10:44.639497   33105 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:10:44.639501   33105 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:10:44.639504   33105 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:10:44.639506   33105 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:10:44.639509   33105 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:10:44.639511   33105 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:10:44.639514   33105 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:10:44.639520   33105 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:10:44.639523   33105 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:10:44.639526   33105 cri.go:89] found id: ""
	I1201 19:10:44.639564   33105 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:10:44.652975   33105 out.go:203] 
	W1201 19:10:44.654037   33105 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:10:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:10:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:10:44.654059   33105 out.go:285] * 
	* 
	W1201 19:10:44.657010   33105 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:10:44.658109   33105 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.27s)
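Every addons disable call in this run fails the same way before it touches the addon: exit status 11 with MK_ADDON_DISABLE_PAUSED. The ssh_runner lines above show minikube's paused-check first listing kube-system containers with crictl (which succeeds) and then running "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O node, suggesting the containers are not tracked under runc's default state directory, so the check cannot complete. A minimal reproduction on the node, re-running the commands visible in the output above (the ssh step is an assumption about how to reach the node; the two checks themselves are copied from the log), would look like:

	out/minikube-linux-amd64 -p addons-844427 ssh
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints the container IDs listed above
	sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory
	ls -ld /run/runc                                                            # confirms the runc state directory is absent

This points at the test environment or the paused-check path rather than at the individual addons; the same exit status 11 recurs in the InspektorGadget and MetricsServer disable calls below.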

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vbz2n" [6e15a859-9aa0-4d63-bf12-07abb2c6a5be] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004036774s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (242.291636ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:31.669234   29883 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:31.669527   29883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:31.669537   29883 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:31.669541   29883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:31.669737   29883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:31.669973   29883 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:31.670243   29883 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:31.670258   29883 addons.go:622] checking whether the cluster is paused
	I1201 19:08:31.670345   29883 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:31.670359   29883 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:31.670707   29883 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:31.688616   29883 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:31.688688   29883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:31.706359   29883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:31.803927   29883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:31.803994   29883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:31.832435   29883 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:31.832452   29883 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:31.832456   29883 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:31.832460   29883 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:31.832463   29883 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:31.832467   29883 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:31.832469   29883 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:31.832472   29883 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:31.832474   29883 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:31.832482   29883 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:31.832485   29883 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:31.832488   29883 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:31.832492   29883 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:31.832496   29883 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:31.832515   29883 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:31.832534   29883 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:31.832544   29883 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:31.832548   29883 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:31.832551   29883 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:31.832553   29883 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:31.832556   29883 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:31.832559   29883 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:31.832562   29883 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:31.832564   29883 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:31.832567   29883 cri.go:89] found id: ""
	I1201 19:08:31.832610   29883 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:31.847540   29883 out.go:203] 
	W1201 19:08:31.849006   29883 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:31.849024   29883 out.go:285] * 
	* 
	W1201 19:08:31.851996   29883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:31.853219   29883 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.181617ms
I1201 19:08:21.306420   16873 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1201 19:08:21.306442   16873 kapi.go:107] duration metric: took 3.029495ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004134628s
addons_test.go:463: (dbg) Run:  kubectl --context addons-844427 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.619359ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 19:08:26.420154   29364 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:26.420471   29364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:26.420482   29364 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:26.420489   29364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:26.420697   29364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:26.420966   29364 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:26.421305   29364 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:26.421327   29364 addons.go:622] checking whether the cluster is paused
	I1201 19:08:26.421430   29364 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:26.421451   29364 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:26.421855   29364 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:26.439559   29364 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:26.439609   29364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:26.457920   29364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:26.556743   29364 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:26.556820   29364 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:26.585111   29364 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:26.585129   29364 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:26.585133   29364 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:26.585136   29364 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:26.585139   29364 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:26.585142   29364 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:26.585145   29364 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:26.585148   29364 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:26.585151   29364 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:26.585155   29364 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:26.585158   29364 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:26.585161   29364 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:26.585164   29364 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:26.585166   29364 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:26.585169   29364 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:26.585177   29364 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:26.585180   29364 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:26.585184   29364 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:26.585187   29364 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:26.585190   29364 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:26.585193   29364 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:26.585196   29364 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:26.585198   29364 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:26.585201   29364 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:26.585204   29364 cri.go:89] found id: ""
	I1201 19:08:26.585237   29364 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:26.599038   29364 out.go:203] 
	W1201 19:08:26.600402   29364 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:26.600422   29364 out.go:285] * 
	* 
	W1201 19:08:26.603311   29364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:26.604740   29364 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
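The stderr above spells out the full paused check: inspect the container state, look up the host port published for 22/tcp, ssh in as docker@127.0.0.1, list kube-system containers with crictl, then call runc. The same sequence can be replayed from the host to see exactly where it breaks; a sketch using the port and key path from this log (both differ between runs):

docker container inspect addons-844427 --format '{{.State.Status}}'
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-844427
ssh -i /home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa -p 32768 docker@127.0.0.1 \
  'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; sudo runc list -f json'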

TestAddons/parallel/CSI (53.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1201 19:08:21.303427   16873 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.039657ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-844427 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-844427 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f90e65f2-1599-484b-b8af-24bdd179fe8c] Pending
helpers_test.go:352: "task-pv-pod" [f90e65f2-1599-484b-b8af-24bdd179fe8c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [f90e65f2-1599-484b-b8af-24bdd179fe8c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00405161s
addons_test.go:572: (dbg) Run:  kubectl --context addons-844427 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-844427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-844427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-844427 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-844427 delete pod task-pv-pod: (1.199821218s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-844427 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-844427 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-844427 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [13bbef40-4fb6-49a4-8bc2-3dfc41166c87] Pending
helpers_test.go:352: "task-pv-pod-restore" [13bbef40-4fb6-49a4-8bc2-3dfc41166c87] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [13bbef40-4fb6-49a4-8bc2-3dfc41166c87] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003053668s
addons_test.go:614: (dbg) Run:  kubectl --context addons-844427 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-844427 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-844427 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (244.109738ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 19:09:14.410520   30960 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:09:14.410862   30960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:09:14.410874   30960 out.go:374] Setting ErrFile to fd 2...
	I1201 19:09:14.410880   30960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:09:14.411233   30960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:09:14.411611   30960 mustload.go:66] Loading cluster: addons-844427
	I1201 19:09:14.412112   30960 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:09:14.412140   30960 addons.go:622] checking whether the cluster is paused
	I1201 19:09:14.412276   30960 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:09:14.412310   30960 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:09:14.412869   30960 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:09:14.431280   30960 ssh_runner.go:195] Run: systemctl --version
	I1201 19:09:14.431356   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:09:14.448207   30960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:09:14.545907   30960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:09:14.546009   30960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:09:14.574123   30960 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:09:14.574143   30960 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:09:14.574150   30960 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:09:14.574155   30960 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:09:14.574160   30960 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:09:14.574166   30960 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:09:14.574171   30960 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:09:14.574175   30960 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:09:14.574179   30960 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:09:14.574195   30960 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:09:14.574200   30960 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:09:14.574205   30960 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:09:14.574209   30960 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:09:14.574214   30960 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:09:14.574218   30960 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:09:14.574226   30960 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:09:14.574233   30960 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:09:14.574240   30960 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:09:14.574244   30960 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:09:14.574248   30960 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:09:14.574254   30960 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:09:14.574262   30960 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:09:14.574269   30960 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:09:14.574276   30960 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:09:14.574282   30960 cri.go:89] found id: ""
	I1201 19:09:14.574353   30960 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:09:14.588874   30960 out.go:203] 
	W1201 19:09:14.590005   30960 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:09:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:09:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:09:14.590023   30960 out.go:285] * 
	* 
	W1201 19:09:14.593074   30960 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:09:14.594384   30960 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.774919ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 19:09:14.655254   31036 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:09:14.655559   31036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:09:14.655569   31036 out.go:374] Setting ErrFile to fd 2...
	I1201 19:09:14.655573   31036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:09:14.655774   31036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:09:14.656022   31036 mustload.go:66] Loading cluster: addons-844427
	I1201 19:09:14.656317   31036 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:09:14.656338   31036 addons.go:622] checking whether the cluster is paused
	I1201 19:09:14.656415   31036 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:09:14.656430   31036 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:09:14.656780   31036 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:09:14.674441   31036 ssh_runner.go:195] Run: systemctl --version
	I1201 19:09:14.674492   31036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:09:14.693153   31036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:09:14.790791   31036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:09:14.790877   31036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:09:14.819002   31036 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:09:14.819022   31036 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:09:14.819026   31036 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:09:14.819029   31036 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:09:14.819033   31036 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:09:14.819036   31036 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:09:14.819039   31036 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:09:14.819041   31036 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:09:14.819044   31036 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:09:14.819054   31036 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:09:14.819058   31036 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:09:14.819060   31036 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:09:14.819070   31036 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:09:14.819073   31036 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:09:14.819076   31036 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:09:14.819088   31036 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:09:14.819096   31036 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:09:14.819100   31036 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:09:14.819103   31036 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:09:14.819106   31036 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:09:14.819109   31036 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:09:14.819111   31036 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:09:14.819114   31036 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:09:14.819117   31036 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:09:14.819120   31036 cri.go:89] found id: ""
	I1201 19:09:14.819155   31036 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:09:14.833217   31036 out.go:203] 
	W1201 19:09:14.834553   31036 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:09:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:09:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:09:14.834577   31036 out.go:285] * 
	* 
	W1201 19:09:14.837585   31036 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:09:14.838945   31036 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (53.54s)
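Everything in the CSI walkthrough up to the disable step passes: hpvc binds, task-pv-pod runs, new-snapshot-demo becomes ready, and hpvc-restore plus task-pv-pod-restore come up from the snapshot; only the trailing addon-disable calls fail with the same runc error. The testdata manifests are not reproduced in this report, but the snapshot-and-restore half of the flow corresponds roughly to the following hand-written sketch (not the actual testdata; the csi-hostpath-snapclass and csi-hostpath-sc names are assumptions):

kubectl --context addons-844427 create -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF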

TestAddons/parallel/Headlamp (2.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-844427 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-844427 --alsologtostderr -v=1: exit status 11 (241.317598ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1201 19:08:10.831329   26845 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:10.831470   26845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:10.831480   26845 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:10.831484   26845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:10.831667   26845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:10.831907   26845 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:10.832202   26845 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:10.832220   26845 addons.go:622] checking whether the cluster is paused
	I1201 19:08:10.832316   26845 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:10.832331   26845 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:10.832729   26845 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:10.850255   26845 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:10.850324   26845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:10.867388   26845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:10.964809   26845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:10.964905   26845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:10.992644   26845 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:10.992663   26845 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:10.992667   26845 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:10.992671   26845 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:10.992674   26845 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:10.992678   26845 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:10.992681   26845 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:10.992684   26845 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:10.992686   26845 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:10.992702   26845 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:10.992706   26845 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:10.992709   26845 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:10.992715   26845 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:10.992718   26845 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:10.992721   26845 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:10.992725   26845 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:10.992731   26845 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:10.992736   26845 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:10.992739   26845 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:10.992742   26845 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:10.992749   26845 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:10.992751   26845 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:10.992754   26845 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:10.992757   26845 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:10.992759   26845 cri.go:89] found id: ""
	I1201 19:08:10.992796   26845 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:11.007375   26845 out.go:203] 
	W1201 19:08:11.008634   26845 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:11.008657   26845 out.go:285] * 
	* 
	W1201 19:08:11.012518   26845 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:11.014177   26845 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-844427 --alsologtostderr -v=1": exit status 11
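The enable path fails in the same place as the disable calls: addons.go:622 runs the paused check before any headlamp manifests are applied, so nothing is deployed when the command exits. Had the check passed, a query along the following lines would show whether the workload landed; a sketch only, and the headlamp namespace name is an assumption rather than something taken from this log:

kubectl --context addons-844427 get deploy,svc,pods -n headlamp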
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-844427
helpers_test.go:243: (dbg) docker inspect addons-844427:

-- stdout --
	[
	    {
	        "Id": "7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13",
	        "Created": "2025-12-01T19:06:21.064128042Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T19:06:21.096067188Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/hostname",
	        "HostsPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/hosts",
	        "LogPath": "/var/lib/docker/containers/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13/7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13-json.log",
	        "Name": "/addons-844427",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-844427:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-844427",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7984c52a63dca6dc77e6e291478883c6c2d99639199cd9d19ae15e7d78acde13",
	                "LowerDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/023709fae24e3caaa3f947705049d04de1d3be5d4edbe25c0e28164a1aa1c1b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-844427",
	                "Source": "/var/lib/docker/volumes/addons-844427/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-844427",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-844427",
	                "name.minikube.sigs.k8s.io": "addons-844427",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d80dd366e8d03cdc0a1ffc3bed3384f1926667f9916154cdc91fe88cd863e7db",
	            "SandboxKey": "/var/run/docker/netns/d80dd366e8d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-844427": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef8689c1e4ee442dd8327401d7cb77b76fb85fe450034dbb88cb010ecfdb389c",
	                    "EndpointID": "9fbbcd729ac40848418db03bba1b61c5588ccf7fabc589afcbdd667e865dd380",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:b6:c6:da:70:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-844427",
	                        "7984c52a63dc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
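This inspect dump is where the facts used earlier in the traces come from: the container is running, 22/tcp is published on 127.0.0.1:32768, and the node sits at 192.168.49.2 on the addons-844427 network. For spot checks it is usually easier to pull single fields with --format; the first two templates appear verbatim in the logs above, the third is a common idiom added here for convenience:

docker inspect addons-844427 --format '{{.State.Status}}'
docker inspect addons-844427 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
docker inspect addons-844427 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'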
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-844427 -n addons-844427
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-844427 logs -n 25: (1.098495533s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-874273   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-874273                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-874273   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-883422 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-883422   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-883422                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-883422   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-590206 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-590206   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-590206                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-590206   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-874273                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-874273   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-883422                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-883422   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-590206                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-590206   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ --download-only -p download-docker-082948 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-082948 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ -p download-docker-082948                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-082948 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ --download-only -p binary-mirror-325932 --alsologtostderr --binary-mirror http://127.0.0.1:37241 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-325932   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ -p binary-mirror-325932                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-325932   │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ addons  │ enable dashboard -p addons-844427                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-844427                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ start   │ -p addons-844427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-844427 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-844427 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ enable headlamp -p addons-844427 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-844427          │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:59.446368   18652 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:59.446454   18652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:59.446458   18652 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:59.446462   18652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:59.446642   18652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:05:59.447095   18652 out.go:368] Setting JSON to false
	I1201 19:05:59.447853   18652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2910,"bootTime":1764613049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:59.447914   18652 start.go:143] virtualization: kvm guest
	I1201 19:05:59.449739   18652 out.go:179] * [addons-844427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:59.450886   18652 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:05:59.450912   18652 notify.go:221] Checking for updates...
	I1201 19:05:59.453373   18652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:59.454620   18652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:05:59.455794   18652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:05:59.456926   18652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:05:59.458060   18652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:05:59.459240   18652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:59.483493   18652 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:05:59.483582   18652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:59.539071   18652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-01 19:05:59.530206961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:59.539180   18652 docker.go:319] overlay module found
	I1201 19:05:59.540936   18652 out.go:179] * Using the docker driver based on user configuration
	I1201 19:05:59.542113   18652 start.go:309] selected driver: docker
	I1201 19:05:59.542127   18652 start.go:927] validating driver "docker" against <nil>
	I1201 19:05:59.542138   18652 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:05:59.542666   18652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:59.596717   18652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-01 19:05:59.587916678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:59.596854   18652 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:59.597035   18652 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:05:59.598694   18652 out.go:179] * Using Docker driver with root privileges
	I1201 19:05:59.599904   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:05:59.599958   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:05:59.599968   18652 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 19:05:59.600028   18652 start.go:353] cluster config:
	{Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1201 19:05:59.601239   18652 out.go:179] * Starting "addons-844427" primary control-plane node in "addons-844427" cluster
	I1201 19:05:59.602316   18652 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 19:05:59.603457   18652 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 19:05:59.604531   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:59.604561   18652 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 19:05:59.604571   18652 cache.go:65] Caching tarball of preloaded images
	I1201 19:05:59.604609   18652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 19:05:59.604658   18652 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 19:05:59.604673   18652 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 19:05:59.604998   18652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json ...
	I1201 19:05:59.605025   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json: {Name:mkc49c9a3396671097648e11753d3c1d4f182d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:59.620629   18652 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1201 19:05:59.620742   18652 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1201 19:05:59.620758   18652 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1201 19:05:59.620763   18652 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1201 19:05:59.620769   18652 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1201 19:05:59.620776   18652 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1201 19:06:13.305751   18652 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1201 19:06:13.305803   18652 cache.go:243] Successfully downloaded all kic artifacts
	I1201 19:06:13.305845   18652 start.go:360] acquireMachinesLock for addons-844427: {Name:mk144e573f21904e0704a69cb6c835a66d7023b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 19:06:13.305974   18652 start.go:364] duration metric: took 104.638µs to acquireMachinesLock for "addons-844427"
	I1201 19:06:13.306009   18652 start.go:93] Provisioning new machine with config: &{Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:06:13.306128   18652 start.go:125] createHost starting for "" (driver="docker")
	I1201 19:06:13.308869   18652 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1201 19:06:13.309103   18652 start.go:159] libmachine.API.Create for "addons-844427" (driver="docker")
	I1201 19:06:13.309152   18652 client.go:173] LocalClient.Create starting
	I1201 19:06:13.309260   18652 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem
	I1201 19:06:13.346411   18652 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem
	I1201 19:06:13.438320   18652 cli_runner.go:164] Run: docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 19:06:13.456233   18652 cli_runner.go:211] docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 19:06:13.456323   18652 network_create.go:284] running [docker network inspect addons-844427] to gather additional debugging logs...
	I1201 19:06:13.456341   18652 cli_runner.go:164] Run: docker network inspect addons-844427
	W1201 19:06:13.472025   18652 cli_runner.go:211] docker network inspect addons-844427 returned with exit code 1
	I1201 19:06:13.472059   18652 network_create.go:287] error running [docker network inspect addons-844427]: docker network inspect addons-844427: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-844427 not found
	I1201 19:06:13.472076   18652 network_create.go:289] output of [docker network inspect addons-844427]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-844427 not found
	
	** /stderr **
	I1201 19:06:13.472162   18652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 19:06:13.488033   18652 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1ae30}
	I1201 19:06:13.488079   18652 network_create.go:124] attempt to create docker network addons-844427 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1201 19:06:13.488137   18652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-844427 addons-844427
	I1201 19:06:13.531955   18652 network_create.go:108] docker network addons-844427 192.168.49.0/24 created
	I1201 19:06:13.531982   18652 kic.go:121] calculated static IP "192.168.49.2" for the "addons-844427" container
	I1201 19:06:13.532058   18652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 19:06:13.547549   18652 cli_runner.go:164] Run: docker volume create addons-844427 --label name.minikube.sigs.k8s.io=addons-844427 --label created_by.minikube.sigs.k8s.io=true
	I1201 19:06:13.564252   18652 oci.go:103] Successfully created a docker volume addons-844427
	I1201 19:06:13.564382   18652 cli_runner.go:164] Run: docker run --rm --name addons-844427-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --entrypoint /usr/bin/test -v addons-844427:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 19:06:17.186503   18652 cli_runner.go:217] Completed: docker run --rm --name addons-844427-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --entrypoint /usr/bin/test -v addons-844427:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (3.622069758s)
	I1201 19:06:17.186545   18652 oci.go:107] Successfully prepared a docker volume addons-844427
	I1201 19:06:17.186597   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:06:17.186609   18652 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 19:06:17.186653   18652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-844427:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 19:06:20.990991   18652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-844427:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.804282602s)
	I1201 19:06:20.991021   18652 kic.go:203] duration metric: took 3.804409241s to extract preloaded images to volume ...
	W1201 19:06:20.991138   18652 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1201 19:06:20.991182   18652 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1201 19:06:20.991218   18652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1201 19:06:21.048768   18652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-844427 --name addons-844427 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-844427 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-844427 --network addons-844427 --ip 192.168.49.2 --volume addons-844427:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1201 19:06:21.334086   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Running}}
	I1201 19:06:21.353554   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.372742   18652 cli_runner.go:164] Run: docker exec addons-844427 stat /var/lib/dpkg/alternatives/iptables
	I1201 19:06:21.417875   18652 oci.go:144] the created container "addons-844427" has a running status.
	I1201 19:06:21.417900   18652 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa...
	I1201 19:06:21.489966   18652 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 19:06:21.516576   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.533912   18652 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 19:06:21.533934   18652 kic_runner.go:114] Args: [docker exec --privileged addons-844427 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 19:06:21.609136   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:21.633330   18652 machine.go:94] provisionDockerMachine start ...
	I1201 19:06:21.633451   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.656478   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.656786   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.656804   18652 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 19:06:21.799650   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-844427
	
	I1201 19:06:21.799686   18652 ubuntu.go:182] provisioning hostname "addons-844427"
	I1201 19:06:21.799761   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.819197   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.819455   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.819476   18652 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-844427 && echo "addons-844427" | sudo tee /etc/hostname
	I1201 19:06:21.967902   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-844427
	
	I1201 19:06:21.967979   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:21.986364   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:21.986577   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:21.986593   18652 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-844427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-844427/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-844427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 19:06:22.123679   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 19:06:22.123708   18652 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 19:06:22.123732   18652 ubuntu.go:190] setting up certificates
	I1201 19:06:22.123740   18652 provision.go:84] configureAuth start
	I1201 19:06:22.123783   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.140560   18652 provision.go:143] copyHostCerts
	I1201 19:06:22.140621   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 19:06:22.140741   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 19:06:22.140846   18652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 19:06:22.140924   18652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.addons-844427 san=[127.0.0.1 192.168.49.2 addons-844427 localhost minikube]
	I1201 19:06:22.215513   18652 provision.go:177] copyRemoteCerts
	I1201 19:06:22.215562   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 19:06:22.215612   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.232480   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.330363   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 19:06:22.348248   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 19:06:22.364652   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 19:06:22.380460   18652 provision.go:87] duration metric: took 256.709149ms to configureAuth
	I1201 19:06:22.380481   18652 ubuntu.go:206] setting minikube options for container-runtime
	I1201 19:06:22.380657   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:22.380780   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.397095   18652 main.go:143] libmachine: Using SSH client type: native
	I1201 19:06:22.397395   18652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1201 19:06:22.397424   18652 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 19:06:22.668591   18652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 19:06:22.668611   18652 machine.go:97] duration metric: took 1.035257294s to provisionDockerMachine
	I1201 19:06:22.668622   18652 client.go:176] duration metric: took 9.359459963s to LocalClient.Create
	I1201 19:06:22.668662   18652 start.go:167] duration metric: took 9.359560986s to libmachine.API.Create "addons-844427"
	I1201 19:06:22.668670   18652 start.go:293] postStartSetup for "addons-844427" (driver="docker")
	I1201 19:06:22.668679   18652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 19:06:22.668723   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 19:06:22.668764   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.685615   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.785105   18652 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 19:06:22.788465   18652 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 19:06:22.788489   18652 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 19:06:22.788499   18652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 19:06:22.788547   18652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 19:06:22.788569   18652 start.go:296] duration metric: took 119.89335ms for postStartSetup
	I1201 19:06:22.788859   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.805467   18652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/config.json ...
	I1201 19:06:22.805700   18652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:06:22.805736   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.822672   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.918180   18652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 19:06:22.922441   18652 start.go:128] duration metric: took 9.616299155s to createHost
	I1201 19:06:22.922465   18652 start.go:83] releasing machines lock for "addons-844427", held for 9.6164769s
	I1201 19:06:22.922523   18652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-844427
	I1201 19:06:22.939414   18652 ssh_runner.go:195] Run: cat /version.json
	I1201 19:06:22.939453   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.939537   18652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 19:06:22.939633   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:22.957222   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:22.958034   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:23.051154   18652 ssh_runner.go:195] Run: systemctl --version
	I1201 19:06:23.102031   18652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 19:06:23.136493   18652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 19:06:23.140917   18652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 19:06:23.140968   18652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 19:06:23.166171   18652 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 19:06:23.166193   18652 start.go:496] detecting cgroup driver to use...
	I1201 19:06:23.166225   18652 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 19:06:23.166269   18652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 19:06:23.181711   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 19:06:23.193822   18652 docker.go:218] disabling cri-docker service (if available) ...
	I1201 19:06:23.193881   18652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 19:06:23.210578   18652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 19:06:23.226846   18652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 19:06:23.302480   18652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 19:06:23.389350   18652 docker.go:234] disabling docker service ...
	I1201 19:06:23.389418   18652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 19:06:23.406722   18652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 19:06:23.418393   18652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 19:06:23.499436   18652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 19:06:23.575012   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 19:06:23.586704   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 19:06:23.599572   18652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 19:06:23.599632   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.608878   18652 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 19:06:23.608925   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.616810   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.624791   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.633097   18652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 19:06:23.640485   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.648489   18652 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.660819   18652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:06:23.669095   18652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 19:06:23.676004   18652 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 19:06:23.676066   18652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 19:06:23.687274   18652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 19:06:23.694680   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:23.772279   18652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 19:06:23.900576   18652 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 19:06:23.900636   18652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 19:06:23.904311   18652 start.go:564] Will wait 60s for crictl version
	I1201 19:06:23.904364   18652 ssh_runner.go:195] Run: which crictl
	I1201 19:06:23.907668   18652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 19:06:23.929764   18652 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 19:06:23.929876   18652 ssh_runner.go:195] Run: crio --version
	I1201 19:06:23.956668   18652 ssh_runner.go:195] Run: crio --version
	I1201 19:06:23.984683   18652 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 19:06:23.985858   18652 cli_runner.go:164] Run: docker network inspect addons-844427 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 19:06:24.002507   18652 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1201 19:06:24.006557   18652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:06:24.016379   18652 kubeadm.go:884] updating cluster {Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 19:06:24.016494   18652 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:06:24.016545   18652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:06:24.047539   18652 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 19:06:24.047574   18652 crio.go:433] Images already preloaded, skipping extraction
	I1201 19:06:24.047635   18652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:06:24.072232   18652 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 19:06:24.072254   18652 cache_images.go:86] Images are preloaded, skipping loading
	I1201 19:06:24.072262   18652 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1201 19:06:24.072374   18652 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-844427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
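	The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 363-byte scp). As a rough cross-check on a live node, assuming the addons-844427 profile is still up, the merged unit can be inspected from the host:

		minikube -p addons-844427 ssh -- systemctl cat kubelet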
	I1201 19:06:24.072447   18652 ssh_runner.go:195] Run: crio config
	I1201 19:06:24.115614   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:06:24.115638   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:06:24.115657   18652 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 19:06:24.115684   18652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-844427 NodeName:addons-844427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 19:06:24.115838   18652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-844427"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 19:06:24.115915   18652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 19:06:24.124032   18652 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 19:06:24.124092   18652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 19:06:24.132114   18652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1201 19:06:24.144519   18652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 19:06:24.159436   18652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
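	The kubeadm configuration printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here and later copied to /var/tmp/minikube/kubeadm.yaml for the kubeadm init --config call below. If a generated config like this needs checking by hand, one option (a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.2 is used on the node) is kubeadm's built-in validator:

		sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new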
	I1201 19:06:24.171715   18652 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1201 19:06:24.175327   18652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:06:24.184977   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:24.262546   18652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:06:24.281523   18652 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427 for IP: 192.168.49.2
	I1201 19:06:24.281546   18652 certs.go:195] generating shared ca certs ...
	I1201 19:06:24.281567   18652 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.281702   18652 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 19:06:24.323486   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt ...
	I1201 19:06:24.323514   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt: {Name:mke2dc2bda082d7cec68c315ca42d5e315f550a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.323706   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key ...
	I1201 19:06:24.323720   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key: {Name:mkc0680ae5c06e9f83eb9436d2f7fc0a150e26bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.323818   18652 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 19:06:24.400383   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt ...
	I1201 19:06:24.400409   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt: {Name:mk09a8428296dedb7a269a80e7a3b1792e56a101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.400568   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key ...
	I1201 19:06:24.400579   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key: {Name:mk00840924daeb47b43c83d2f1f1f2e8f48beaa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.400646   18652 certs.go:257] generating profile certs ...
	I1201 19:06:24.400696   18652 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key
	I1201 19:06:24.400710   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt with IP's: []
	I1201 19:06:24.463121   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt ...
	I1201 19:06:24.463145   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: {Name:mk2ebc2b87627b12e31a7751c9c82dd1b2ec20df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.463307   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key ...
	I1201 19:06:24.463320   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.key: {Name:mk2dea41988683e567eb325458cbbc7b09e11e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.463399   18652 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e
	I1201 19:06:24.463418   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1201 19:06:24.602658   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e ...
	I1201 19:06:24.602685   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e: {Name:mkc2344faacd89d4d0688f6c77f1919afa037ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.602844   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e ...
	I1201 19:06:24.602857   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e: {Name:mk7fa3f4e2e4088d9c5aaded46e27351b455ac2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.602929   18652 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt.40ffae2e -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt
	I1201 19:06:24.603002   18652 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key.40ffae2e -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key
	I1201 19:06:24.603056   18652 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key
	I1201 19:06:24.603073   18652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt with IP's: []
	I1201 19:06:24.822280   18652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt ...
	I1201 19:06:24.822316   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt: {Name:mk8c371d73674b41864e81a157290f8bd3fe3d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.822482   18652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key ...
	I1201 19:06:24.822493   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key: {Name:mke42070f335d652deae6f54cd0d19f5d1b18e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:24.822661   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 19:06:24.822697   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 19:06:24.822723   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 19:06:24.822746   18652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 19:06:24.823276   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 19:06:24.840677   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 19:06:24.856545   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 19:06:24.872903   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 19:06:24.889175   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 19:06:24.905406   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 19:06:24.921714   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 19:06:24.938140   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 19:06:24.954324   18652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 19:06:24.971951   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 19:06:24.983889   18652 ssh_runner.go:195] Run: openssl version
	I1201 19:06:24.989774   18652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 19:06:25.000573   18652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.004437   18652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.004488   18652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:06:25.040395   18652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
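	The openssl x509 -hash call above computes the subject hash that names the symlink created next (b5213941.0), which is how the minikube CA lands in the node's system trust store. A minimal manual reproduction of that check, assuming a shell on the node, would look like:

		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		ls -l /etc/ssl/certs/${HASH}.0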
	I1201 19:06:25.049542   18652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 19:06:25.053424   18652 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 19:06:25.053477   18652 kubeadm.go:401] StartCluster: {Name:addons-844427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-844427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:06:25.053542   18652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:06:25.053595   18652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:06:25.082295   18652 cri.go:89] found id: ""
	I1201 19:06:25.082354   18652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 19:06:25.090170   18652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 19:06:25.097732   18652 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 19:06:25.097774   18652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 19:06:25.105123   18652 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 19:06:25.105146   18652 kubeadm.go:158] found existing configuration files:
	
	I1201 19:06:25.105192   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 19:06:25.112518   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 19:06:25.112566   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 19:06:25.119477   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 19:06:25.126702   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 19:06:25.126749   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 19:06:25.134461   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 19:06:25.141943   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 19:06:25.141999   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 19:06:25.149147   18652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 19:06:25.157011   18652 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 19:06:25.157085   18652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 19:06:25.164296   18652 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 19:06:25.220808   18652 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 19:06:25.275084   18652 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 19:06:35.124038   18652 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 19:06:35.124110   18652 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 19:06:35.124231   18652 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 19:06:35.124332   18652 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 19:06:35.124396   18652 kubeadm.go:319] OS: Linux
	I1201 19:06:35.124460   18652 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 19:06:35.124528   18652 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 19:06:35.124601   18652 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 19:06:35.124669   18652 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 19:06:35.124737   18652 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 19:06:35.124786   18652 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 19:06:35.124832   18652 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 19:06:35.124870   18652 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 19:06:35.124947   18652 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 19:06:35.125030   18652 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 19:06:35.125102   18652 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 19:06:35.125162   18652 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 19:06:35.126720   18652 out.go:252]   - Generating certificates and keys ...
	I1201 19:06:35.126786   18652 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 19:06:35.126841   18652 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 19:06:35.126896   18652 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 19:06:35.126950   18652 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 19:06:35.127005   18652 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 19:06:35.127061   18652 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 19:06:35.127112   18652 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 19:06:35.127271   18652 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-844427 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 19:06:35.127370   18652 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 19:06:35.127527   18652 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-844427 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1201 19:06:35.127586   18652 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 19:06:35.127641   18652 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 19:06:35.127679   18652 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 19:06:35.127724   18652 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 19:06:35.127766   18652 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 19:06:35.127811   18652 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 19:06:35.127853   18652 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 19:06:35.127928   18652 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 19:06:35.127993   18652 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 19:06:35.128102   18652 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 19:06:35.128170   18652 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 19:06:35.129469   18652 out.go:252]   - Booting up control plane ...
	I1201 19:06:35.129547   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 19:06:35.129612   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 19:06:35.129666   18652 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 19:06:35.129771   18652 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 19:06:35.129858   18652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 19:06:35.129942   18652 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 19:06:35.130028   18652 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 19:06:35.130074   18652 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 19:06:35.130182   18652 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 19:06:35.130278   18652 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 19:06:35.130343   18652 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001095894s
	I1201 19:06:35.130416   18652 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 19:06:35.130491   18652 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1201 19:06:35.130567   18652 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 19:06:35.130639   18652 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 19:06:35.130699   18652 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.723809939s
	I1201 19:06:35.130757   18652 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.018008917s
	I1201 19:06:35.130810   18652 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500986328s
	I1201 19:06:35.130929   18652 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 19:06:35.131047   18652 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 19:06:35.131103   18652 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 19:06:35.131273   18652 kubeadm.go:319] [mark-control-plane] Marking the node addons-844427 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 19:06:35.131365   18652 kubeadm.go:319] [bootstrap-token] Using token: gyuws6.vlonq0lhcrfslwtv
	I1201 19:06:35.133603   18652 out.go:252]   - Configuring RBAC rules ...
	I1201 19:06:35.133692   18652 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 19:06:35.133776   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 19:06:35.133901   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 19:06:35.134019   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 19:06:35.134122   18652 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 19:06:35.134208   18652 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 19:06:35.134314   18652 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 19:06:35.134352   18652 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 19:06:35.134397   18652 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 19:06:35.134405   18652 kubeadm.go:319] 
	I1201 19:06:35.134458   18652 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 19:06:35.134464   18652 kubeadm.go:319] 
	I1201 19:06:35.134536   18652 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 19:06:35.134542   18652 kubeadm.go:319] 
	I1201 19:06:35.134562   18652 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 19:06:35.134613   18652 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 19:06:35.134657   18652 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 19:06:35.134665   18652 kubeadm.go:319] 
	I1201 19:06:35.134715   18652 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 19:06:35.134720   18652 kubeadm.go:319] 
	I1201 19:06:35.134763   18652 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 19:06:35.134769   18652 kubeadm.go:319] 
	I1201 19:06:35.134812   18652 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 19:06:35.134882   18652 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 19:06:35.134940   18652 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 19:06:35.134946   18652 kubeadm.go:319] 
	I1201 19:06:35.135023   18652 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 19:06:35.135096   18652 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 19:06:35.135107   18652 kubeadm.go:319] 
	I1201 19:06:35.135203   18652 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gyuws6.vlonq0lhcrfslwtv \
	I1201 19:06:35.135318   18652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 19:06:35.135344   18652 kubeadm.go:319] 	--control-plane 
	I1201 19:06:35.135348   18652 kubeadm.go:319] 
	I1201 19:06:35.135420   18652 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 19:06:35.135427   18652 kubeadm.go:319] 
	I1201 19:06:35.135490   18652 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gyuws6.vlonq0lhcrfslwtv \
	I1201 19:06:35.135585   18652 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
	I1201 19:06:35.135598   18652 cni.go:84] Creating CNI manager for ""
	I1201 19:06:35.135607   18652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:06:35.137112   18652 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 19:06:35.138402   18652 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 19:06:35.142762   18652 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 19:06:35.142780   18652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 19:06:35.155780   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 19:06:35.356084   18652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 19:06:35.356168   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:35.356196   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-844427 minikube.k8s.io/updated_at=2025_12_01T19_06_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-844427 minikube.k8s.io/primary=true
	I1201 19:06:35.367447   18652 ops.go:34] apiserver oom_adj: -16
	I1201 19:06:35.437771   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:35.937905   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:36.438209   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:36.938520   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:37.438660   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:37.938270   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:38.438501   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:38.938719   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:39.438877   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:39.938800   18652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:40.009185   18652 kubeadm.go:1114] duration metric: took 4.653078497s to wait for elevateKubeSystemPrivileges
	I1201 19:06:40.009222   18652 kubeadm.go:403] duration metric: took 14.9557493s to StartCluster
	I1201 19:06:40.009242   18652 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:40.009414   18652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:06:40.009838   18652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:40.010033   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 19:06:40.010043   18652 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:06:40.010132   18652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1201 19:06:40.010253   18652 addons.go:70] Setting yakd=true in profile "addons-844427"
	I1201 19:06:40.010261   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:40.010276   18652 addons.go:239] Setting addon yakd=true in "addons-844427"
	I1201 19:06:40.010278   18652 addons.go:70] Setting inspektor-gadget=true in profile "addons-844427"
	I1201 19:06:40.010309   18652 addons.go:70] Setting registry-creds=true in profile "addons-844427"
	I1201 19:06:40.010321   18652 addons.go:239] Setting addon inspektor-gadget=true in "addons-844427"
	I1201 19:06:40.010333   18652 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-844427"
	I1201 19:06:40.010322   18652 addons.go:70] Setting default-storageclass=true in profile "addons-844427"
	I1201 19:06:40.010339   18652 addons.go:70] Setting volumesnapshots=true in profile "addons-844427"
	I1201 19:06:40.010344   18652 addons.go:70] Setting metrics-server=true in profile "addons-844427"
	I1201 19:06:40.010351   18652 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-844427"
	I1201 19:06:40.010355   18652 addons.go:239] Setting addon volumesnapshots=true in "addons-844427"
	I1201 19:06:40.010358   18652 addons.go:239] Setting addon metrics-server=true in "addons-844427"
	I1201 19:06:40.010365   18652 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-844427"
	I1201 19:06:40.010369   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010374   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010375   18652 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-844427"
	I1201 19:06:40.010388   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010413   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010463   18652 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-844427"
	I1201 19:06:40.010488   18652 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-844427"
	I1201 19:06:40.010510   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010628   18652 addons.go:70] Setting registry=true in profile "addons-844427"
	I1201 19:06:40.010642   18652 addons.go:239] Setting addon registry=true in "addons-844427"
	I1201 19:06:40.010663   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.010830   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010928   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010943   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010963   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.011086   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.011377   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010333   18652 addons.go:239] Setting addon registry-creds=true in "addons-844427"
	I1201 19:06:40.011892   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012363   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.010355   18652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-844427"
	I1201 19:06:40.010330   18652 addons.go:70] Setting volcano=true in profile "addons-844427"
	I1201 19:06:40.012614   18652 addons.go:70] Setting gcp-auth=true in profile "addons-844427"
	I1201 19:06:40.010322   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012637   18652 mustload.go:66] Loading cluster: addons-844427
	I1201 19:06:40.012687   18652 addons.go:70] Setting cloud-spanner=true in profile "addons-844427"
	I1201 19:06:40.012702   18652 addons.go:239] Setting addon cloud-spanner=true in "addons-844427"
	I1201 19:06:40.012725   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012794   18652 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-844427"
	I1201 19:06:40.012853   18652 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-844427"
	I1201 19:06:40.012883   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013045   18652 addons.go:70] Setting ingress=true in profile "addons-844427"
	I1201 19:06:40.013069   18652 addons.go:239] Setting addon ingress=true in "addons-844427"
	I1201 19:06:40.013103   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013159   18652 addons.go:70] Setting storage-provisioner=true in profile "addons-844427"
	I1201 19:06:40.013180   18652 addons.go:239] Setting addon storage-provisioner=true in "addons-844427"
	I1201 19:06:40.013203   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.012618   18652 addons.go:239] Setting addon volcano=true in "addons-844427"
	I1201 19:06:40.013244   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.013386   18652 addons.go:70] Setting ingress-dns=true in profile "addons-844427"
	I1201 19:06:40.013427   18652 addons.go:239] Setting addon ingress-dns=true in "addons-844427"
	I1201 19:06:40.013454   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.016622   18652 out.go:179] * Verifying Kubernetes components...
	I1201 19:06:40.018763   18652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:40.022727   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.023394   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.023987   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024339   18652 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:40.024412   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024589   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024605   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.024664   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.025054   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.027467   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.029113   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.054449   18652 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-844427"
	I1201 19:06:40.054499   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.054948   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.056828   18652 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1201 19:06:40.058346   18652 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1201 19:06:40.058680   18652 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:40.058751   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1201 19:06:40.058849   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.061231   18652 out.go:179]   - Using image docker.io/registry:3.0.0
	I1201 19:06:40.064938   18652 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1201 19:06:40.065705   18652 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1201 19:06:40.065745   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1201 19:06:40.065830   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.066875   18652 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:40.066933   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1201 19:06:40.067040   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.089195   18652 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1201 19:06:40.090355   18652 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:40.090413   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1201 19:06:40.090506   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	W1201 19:06:40.105604   18652 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1201 19:06:40.109594   18652 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1201 19:06:40.110871   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1201 19:06:40.110927   18652 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:40.110942   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1201 19:06:40.111018   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.112183   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1201 19:06:40.112197   18652 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1201 19:06:40.112246   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.114107   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.116954   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1201 19:06:40.119941   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1201 19:06:40.121386   18652 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1201 19:06:40.127275   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1201 19:06:40.127315   18652 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1201 19:06:40.127383   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.134580   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1201 19:06:40.134606   18652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 19:06:40.134681   18652 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1201 19:06:40.139638   18652 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:40.139663   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1201 19:06:40.139724   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.139880   18652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:40.139891   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 19:06:40.139948   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.140169   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1201 19:06:40.141647   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1201 19:06:40.143157   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.144632   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1201 19:06:40.145534   18652 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1201 19:06:40.146696   18652 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:40.146832   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1201 19:06:40.147105   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.148701   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.149771   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1201 19:06:40.151061   18652 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1201 19:06:40.151719   18652 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1201 19:06:40.152384   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1201 19:06:40.152430   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1201 19:06:40.152549   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.155260   18652 out.go:179]   - Using image docker.io/busybox:stable
	I1201 19:06:40.157817   18652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:40.157884   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1201 19:06:40.157971   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.162443   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.166750   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:40.168632   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:40.169865   18652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 19:06:40.170218   18652 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1201 19:06:40.171604   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1201 19:06:40.172865   18652 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:40.172887   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1201 19:06:40.173170   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.173382   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1201 19:06:40.173394   18652 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1201 19:06:40.173439   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.177348   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.184904   18652 addons.go:239] Setting addon default-storageclass=true in "addons-844427"
	I1201 19:06:40.184962   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:40.185471   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:40.211255   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.217969   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.218526   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.223832   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.229052   18652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:06:40.230381   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.234553   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.237214   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.237439   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.238618   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	W1201 19:06:40.242157   18652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1201 19:06:40.242252   18652 retry.go:31] will retry after 246.035147ms: ssh: handshake failed: EOF
	I1201 19:06:40.244959   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.248005   18652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:40.248061   18652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 19:06:40.248140   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:40.279897   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:40.328645   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:40.353736   18652 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1201 19:06:40.353755   18652 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1201 19:06:40.370174   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:40.373302   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:40.378144   18652 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:40.378174   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1201 19:06:40.387152   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:40.393602   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:40.394278   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:40.403858   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1201 19:06:40.403894   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1201 19:06:40.405859   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:40.407621   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:40.434914   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1201 19:06:40.434937   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1201 19:06:40.444652   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:40.446540   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1201 19:06:40.446616   18652 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1201 19:06:40.451249   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:40.452574   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1201 19:06:40.452592   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1201 19:06:40.475892   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1201 19:06:40.475918   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1201 19:06:40.488345   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1201 19:06:40.488422   18652 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1201 19:06:40.504673   18652 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1201 19:06:40.504706   18652 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1201 19:06:40.507481   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1201 19:06:40.507499   18652 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1201 19:06:40.524329   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1201 19:06:40.524357   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1201 19:06:40.535374   18652 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1201 19:06:40.536694   18652 node_ready.go:35] waiting up to 6m0s for node "addons-844427" to be "Ready" ...
	I1201 19:06:40.538814   18652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:40.538834   18652 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1201 19:06:40.559103   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1201 19:06:40.559128   18652 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1201 19:06:40.569968   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1201 19:06:40.569997   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1201 19:06:40.579639   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1201 19:06:40.579669   18652 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1201 19:06:40.597302   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:40.604590   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1201 19:06:40.604618   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1201 19:06:40.613593   18652 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:40.613614   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1201 19:06:40.639481   18652 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:40.639559   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1201 19:06:40.645692   18652 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1201 19:06:40.645762   18652 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1201 19:06:40.668385   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:40.683439   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:40.697095   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1201 19:06:40.697120   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1201 19:06:40.705802   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:40.755708   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1201 19:06:40.755757   18652 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1201 19:06:40.810359   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1201 19:06:40.810382   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1201 19:06:40.864793   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1201 19:06:40.864823   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1201 19:06:40.899547   18652 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 19:06:40.899573   18652 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1201 19:06:40.977834   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 19:06:41.041651   18652 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-844427" context rescaled to 1 replicas
	I1201 19:06:41.569174   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161516722s)
	I1201 19:06:41.569203   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.163309719s)
	I1201 19:06:41.569212   18652 addons.go:495] Verifying addon registry=true in "addons-844427"
	I1201 19:06:41.569226   18652 addons.go:495] Verifying addon ingress=true in "addons-844427"
	I1201 19:06:41.569260   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124522929s)
	I1201 19:06:41.569371   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.118091532s)
	I1201 19:06:41.569457   18652 addons.go:495] Verifying addon metrics-server=true in "addons-844427"
	I1201 19:06:41.571468   18652 out.go:179] * Verifying ingress addon...
	I1201 19:06:41.571478   18652 out.go:179] * Verifying registry addon...
	I1201 19:06:41.573375   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1201 19:06:41.573393   18652 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1201 19:06:41.575632   18652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 19:06:41.575649   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:41.575775   18652 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1201 19:06:41.575794   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:42.003820   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.320337499s)
	I1201 19:06:42.003858   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.335432908s)
	W1201 19:06:42.003892   18652 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:42.003965   18652 retry.go:31] will retry after 244.660599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:42.003917   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.298078797s)
	I1201 19:06:42.004213   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.026344329s)
	I1201 19:06:42.004242   18652 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-844427"
	I1201 19:06:42.005482   18652 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-844427 service yakd-dashboard -n yakd-dashboard
	
	I1201 19:06:42.006429   18652 out.go:179] * Verifying csi-hostpath-driver addon...
	I1201 19:06:42.009403   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1201 19:06:42.013157   18652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 19:06:42.013177   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:42.112745   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.112897   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:42.249102   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:42.512389   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:42.539541   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:42.613459   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.613632   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.013197   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.113504   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:43.113666   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.512165   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.576387   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:43.576400   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.012608   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.076621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.076665   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.514225   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.615678   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.615860   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.672863   18652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.423713426s)
	I1201 19:06:45.012696   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:45.039862   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:45.113581   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:45.113677   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:45.512159   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:45.612916   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:45.613113   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.012624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.076692   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:46.076886   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.513219   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.614389   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:46.614410   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.012983   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:47.082074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:47.082228   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.513173   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:47.539001   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:47.613834   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:47.613888   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.720458   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1201 19:06:47.720533   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:47.737916   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:47.842312   18652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1201 19:06:47.854853   18652 addons.go:239] Setting addon gcp-auth=true in "addons-844427"
	I1201 19:06:47.854918   18652 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:06:47.855253   18652 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:06:47.872349   18652 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1201 19:06:47.872398   18652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:06:47.889127   18652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:06:47.984935   18652 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:47.986188   18652 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1201 19:06:47.987270   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1201 19:06:47.987317   18652 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1201 19:06:48.000264   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1201 19:06:48.000306   18652 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1201 19:06:48.012713   18652 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:48.012732   18652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1201 19:06:48.014845   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.025338   18652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:48.076070   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:48.076242   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.319169   18652 addons.go:495] Verifying addon gcp-auth=true in "addons-844427"
	I1201 19:06:48.320510   18652 out.go:179] * Verifying gcp-auth addon...
	I1201 19:06:48.322314   18652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1201 19:06:48.324385   18652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1201 19:06:48.324404   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:48.511972   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.576680   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:48.576887   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.825438   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.012365   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:49.113076   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:49.113284   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:49.325609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.512980   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:49.539428   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:49.576955   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:49.577108   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:49.825657   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.013467   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.076398   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:50.076575   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.325380   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.511901   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.576426   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:50.576570   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.825354   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:51.013336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:51.076143   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:51.076160   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.325621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:51.512602   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:51.539807   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:51.576160   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:51.576305   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.825766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.012999   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.076674   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:52.076939   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.325348   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.511961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.576628   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:52.576783   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.825416   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.013781   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.076093   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:53.076277   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:53.325159   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.513335   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.576856   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:53.576933   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:53.825921   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:54.013814   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:54.039698   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:54.075932   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:54.076054   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.325756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:54.512555   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:54.576376   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:54.576591   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.825402   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.013470   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.076125   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:55.076379   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.324857   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.514914   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.576705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:55.576924   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.825363   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.013931   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:56.076793   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:56.076865   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.325378   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.511870   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:56.538890   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:56.576331   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:56.576521   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.825751   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.013766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.076958   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:57.076960   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:57.325733   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.512692   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.576078   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:57.576257   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:57.825807   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:58.013390   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:58.076317   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:58.076465   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.324955   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:58.512660   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:06:58.539734   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:06:58.576072   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:58.576337   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.824577   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.012157   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.075752   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:59.075864   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.325525   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.512934   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.576738   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:59.576863   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.825187   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.013118   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.076080   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.076092   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:00.325624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.512141   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.576004   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:00.576119   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.825560   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.013934   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:01.039158   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:01.076516   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:01.076627   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:01.325168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.513315   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:01.575992   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:01.576200   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:01.825605   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:02.013229   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.076135   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:02.076311   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.324861   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:02.512453   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.576502   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:02.576575   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.824825   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.014091   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:03.076434   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:03.076676   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.325197   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.512818   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:03.539072   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:03.576450   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:03.576526   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.825228   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.013756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.076370   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:04.076451   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.324726   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.512371   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.576336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:04.576501   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.824942   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.013510   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:05.075743   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:05.075910   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.325477   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.512252   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:05.539440   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:05.576711   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:05.576886   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.825476   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.013130   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.075654   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:06.075923   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:06.325270   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.512782   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.576277   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:06.576404   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:06.824840   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.013518   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.075945   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:07.076096   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.325630   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.512015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.576416   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:07.576470   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.824970   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.014742   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:08.039702   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:08.075861   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:08.076109   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.325609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.512316   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:08.576226   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:08.576238   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.825494   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.012031   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.076298   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:09.076393   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.324754   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.512221   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.576639   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:09.576846   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.825412   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:10.013578   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:10.076185   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:10.076325   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.324880   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:10.512502   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:10.539826   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:10.576382   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:10.576459   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.824990   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.013740   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.076412   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:11.076467   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:11.324879   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.512748   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.576411   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:11.576606   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:11.825199   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.013281   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:12.076487   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:12.076678   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.325030   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.512694   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:12.539991   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:12.576203   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:12.576446   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.825622   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:13.013609   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.076184   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:13.076311   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.324533   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:13.512221   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.575723   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:13.575929   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.825767   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.015015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.076760   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:14.076791   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:14.325495   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.512074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.575965   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:14.576172   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:14.825661   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.013373   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:15.039352   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:15.076659   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:15.076790   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.325231   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.513091   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:15.575753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:15.575925   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.825447   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.013030   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.075751   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:16.075910   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.325567   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.512017   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.576337   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:16.576530   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.824974   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:17.013552   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:17.039734   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:17.075881   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:17.076110   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.325802   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:17.512395   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:17.575977   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:17.576079   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.825208   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.013621   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.076482   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:18.076599   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:18.325047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.512639   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.576128   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:18.576327   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:18.825647   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.014173   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:19.076650   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:19.076879   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.325228   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.512015   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:19.539343   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:19.576582   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:19.576730   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.825215   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:20.013677   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.076509   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:20.076564   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.324961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:20.512666   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.576351   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:20.576480   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.824877   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.014176   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:21.076753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:21.076839   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.325277   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.512755   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1201 19:07:21.540028   18652 node_ready.go:57] node "addons-844427" has "Ready":"False" status (will retry)
	I1201 19:07:21.576624   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:21.576775   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.825315   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.014056   18652 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 19:07:22.014121   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.042663   18652 node_ready.go:49] node "addons-844427" is "Ready"
	I1201 19:07:22.042697   18652 node_ready.go:38] duration metric: took 41.505967081s for node "addons-844427" to be "Ready" ...
	I1201 19:07:22.042712   18652 api_server.go:52] waiting for apiserver process to appear ...
	I1201 19:07:22.042773   18652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:07:22.063926   18652 api_server.go:72] duration metric: took 42.053851261s to wait for apiserver process to appear ...
	I1201 19:07:22.063960   18652 api_server.go:88] waiting for apiserver healthz status ...
	I1201 19:07:22.063980   18652 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1201 19:07:22.068917   18652 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1201 19:07:22.069861   18652 api_server.go:141] control plane version: v1.34.2
	I1201 19:07:22.069889   18652 api_server.go:131] duration metric: took 5.922111ms to wait for apiserver health ...
	I1201 19:07:22.069901   18652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 19:07:22.072978   18652 system_pods.go:59] 20 kube-system pods found
	I1201 19:07:22.073003   18652 system_pods.go:61] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending
	I1201 19:07:22.073011   18652 system_pods.go:61] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.073017   18652 system_pods.go:61] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.073025   18652 system_pods.go:61] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.073030   18652 system_pods.go:61] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending
	I1201 19:07:22.073034   18652 system_pods.go:61] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.073037   18652 system_pods.go:61] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.073041   18652 system_pods.go:61] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.073046   18652 system_pods.go:61] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.073052   18652 system_pods.go:61] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.073056   18652 system_pods.go:61] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.073062   18652 system_pods.go:61] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.073069   18652 system_pods.go:61] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.073072   18652 system_pods.go:61] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending
	I1201 19:07:22.073080   18652 system_pods.go:61] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.073085   18652 system_pods.go:61] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.073090   18652 system_pods.go:61] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.073096   18652 system_pods.go:61] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending
	I1201 19:07:22.073099   18652 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending
	I1201 19:07:22.073104   18652 system_pods.go:61] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.073117   18652 system_pods.go:74] duration metric: took 3.210649ms to wait for pod list to return data ...
	I1201 19:07:22.073124   18652 default_sa.go:34] waiting for default service account to be created ...
	I1201 19:07:22.075079   18652 default_sa.go:45] found service account: "default"
	I1201 19:07:22.075098   18652 default_sa.go:55] duration metric: took 1.967093ms for default service account to be created ...
	I1201 19:07:22.075107   18652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 19:07:22.076781   18652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 19:07:22.076799   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:22.077146   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.079384   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.079412   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending
	I1201 19:07:22.079428   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.079437   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.079451   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.079457   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending
	I1201 19:07:22.079463   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.079469   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.079476   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.079486   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.079496   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.079502   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.079510   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.079546   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.079556   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending
	I1201 19:07:22.079566   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.079589   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.079655   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.079702   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending
	I1201 19:07:22.079727   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending
	I1201 19:07:22.079743   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.079760   18652 retry.go:31] will retry after 204.891889ms: missing components: kube-dns
	I1201 19:07:22.290187   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.290229   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:07:22.290240   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:07:22.290250   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.290259   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.290267   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:07:22.290278   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.290302   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.290316   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.290322   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.290330   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.290335   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.290341   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.290349   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.290362   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:07:22.290371   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.290378   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.290395   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.290407   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.290422   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.290435   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:07:22.290453   18652 retry.go:31] will retry after 320.941489ms: missing components: kube-dns
	I1201 19:07:22.389122   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.512911   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.613758   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:22.613841   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.615655   18652 system_pods.go:86] 20 kube-system pods found
	I1201 19:07:22.615678   18652 system_pods.go:89] "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:07:22.615684   18652 system_pods.go:89] "coredns-66bc5c9577-kt5tx" [264990f1-f9da-44b2-ad29-b8cdcecb9afb] Running
	I1201 19:07:22.615690   18652 system_pods.go:89] "csi-hostpath-attacher-0" [0c3538f8-06a8-4fa3-b51d-a5e520c50e99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:07:22.615695   18652 system_pods.go:89] "csi-hostpath-resizer-0" [1db28f9e-10a7-4f49-bcf0-86998196b714] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:07:22.615702   18652 system_pods.go:89] "csi-hostpathplugin-84njl" [c23cedcd-a53d-41cf-9118-65184e70cdc3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:07:22.615710   18652 system_pods.go:89] "etcd-addons-844427" [177151f8-ecd3-4545-9a62-01d57af0366b] Running
	I1201 19:07:22.615718   18652 system_pods.go:89] "kindnet-p8gkr" [499a9c16-5c7c-48c6-a18f-3ecb339b2c70] Running
	I1201 19:07:22.615722   18652 system_pods.go:89] "kube-apiserver-addons-844427" [91316d2a-487b-4a2e-af31-70574739fa1a] Running
	I1201 19:07:22.615725   18652 system_pods.go:89] "kube-controller-manager-addons-844427" [828949ee-77a1-43de-837c-f1dbfcf2b113] Running
	I1201 19:07:22.615732   18652 system_pods.go:89] "kube-ingress-dns-minikube" [fe7698cc-abf6-4874-96ee-f8997a752123] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:07:22.615739   18652 system_pods.go:89] "kube-proxy-7w28c" [0835d5c6-1a10-4422-b30e-4221ef70767e] Running
	I1201 19:07:22.615743   18652 system_pods.go:89] "kube-scheduler-addons-844427" [26545d56-e884-4a86-9c4f-ac0fc2a96bf4] Running
	I1201 19:07:22.615748   18652 system_pods.go:89] "metrics-server-85b7d694d7-xs4wl" [211a4016-77bf-43b3-8765-24567cae6b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:07:22.615754   18652 system_pods.go:89] "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:07:22.615762   18652 system_pods.go:89] "registry-6b586f9694-g722r" [aab1ac21-3d9b-432a-9c79-77419a1e6c3e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:07:22.615769   18652 system_pods.go:89] "registry-creds-764b6fb674-sqhck" [f6be056a-d2f0-4bd2-a225-0755fd0d6439] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:07:22.615776   18652 system_pods.go:89] "registry-proxy-q7742" [f6fe9017-d264-4a76-a4d4-9947815e6804] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:07:22.615781   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-977vf" [650a675e-0f4d-4749-9455-36a2f0b18162] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.615786   18652 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9ghp7" [a06ad8ab-2315-4eb7-8ca7-9e9838ceb101] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:07:22.615790   18652 system_pods.go:89] "storage-provisioner" [ab094890-359e-4017-b2e7-33117da16c40] Running
	I1201 19:07:22.615797   18652 system_pods.go:126] duration metric: took 540.6839ms to wait for k8s-apps to be running ...
	I1201 19:07:22.615805   18652 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 19:07:22.615844   18652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:07:22.630138   18652 system_svc.go:56] duration metric: took 14.310923ms WaitForService to wait for kubelet
	I1201 19:07:22.630171   18652 kubeadm.go:587] duration metric: took 42.620102274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:07:22.630191   18652 node_conditions.go:102] verifying NodePressure condition ...
	I1201 19:07:22.632688   18652 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 19:07:22.632710   18652 node_conditions.go:123] node cpu capacity is 8
	I1201 19:07:22.632725   18652 node_conditions.go:105] duration metric: took 2.529514ms to run NodePressure ...
	I1201 19:07:22.632737   18652 start.go:242] waiting for startup goroutines ...
	I1201 19:07:22.825644   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.014028   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.076521   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:23.076596   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:23.325253   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.514507   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.577032   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:23.577241   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:23.825628   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.014836   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.076580   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:24.076605   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.325661   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.513380   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.577712   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:24.577722   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.825074   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:25.014219   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.076746   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:25.076760   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.325436   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:25.514198   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.614147   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:25.614160   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.826648   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.012855   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.076303   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:26.076420   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:26.327592   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.514376   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.614951   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:26.615078   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:26.826122   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.016083   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.077102   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.077212   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:27.325936   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.512987   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.576613   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.576629   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:27.825164   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.014705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.076454   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:28.076491   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.325321   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.513492   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.577396   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.577651   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:28.825381   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.014800   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.076816   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:29.077497   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.325226   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.512725   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.576822   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.576944   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:29.826549   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:30.015353   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:30.077414   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:30.077475   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.325425   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:30.696498   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:30.696572   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.696605   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:30.825894   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.014363   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.077395   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:31.077395   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.325716   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.513535   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.577668   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:31.577681   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.825175   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.015498   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.077316   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:32.077418   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.326063   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.513496   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.615137   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:32.615421   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.825986   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.015684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.076431   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:33.076469   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:33.325253   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.513643   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.576880   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:33.577025   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:33.826723   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:34.014684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.077035   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:34.077158   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:34.325885   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:34.513335   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.600143   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:34.600161   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:34.826517   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.014681   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:35.076561   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:35.076602   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:35.325428   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.514653   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:35.577207   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:35.577459   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:35.825674   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.015894   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:36.077356   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:36.077495   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:36.325191   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.513756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:36.577842   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:36.577872   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:36.825993   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.015275   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:37.077224   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:37.077385   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:37.325003   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.529997   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:37.576975   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:37.577045   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:37.826155   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.015542   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:38.077134   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:38.077165   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:38.325705   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.513042   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:38.613772   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:38.613983   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:38.825536   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:39.013889   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:39.113806   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:39.114024   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:39.326349   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:39.513109   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:39.576853   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:39.577199   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:39.826837   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:40.014348   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:40.076707   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:40.076870   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:40.325432   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:40.513875   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:40.576413   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:40.577645   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:40.825073   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:41.015093   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:41.077138   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:41.077272   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:41.325902   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:41.512724   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:41.577444   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:41.577451   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:41.825573   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:42.013556   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:42.076060   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:42.076237   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:42.325684   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:42.512766   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:42.576218   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:42.576157   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:42.826090   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:43.017504   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:43.076790   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:43.077416   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:43.325867   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:43.596542   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:43.596856   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:43.596953   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:43.825865   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:44.015901   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:44.076508   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:44.076638   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:44.325351   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:44.512473   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:44.577002   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:44.577047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:44.826200   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:45.014179   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:45.076602   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:45.076623   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:45.325596   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:45.512863   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:45.576768   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:45.576785   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:45.825938   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:46.015778   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:46.115976   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:46.116159   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:46.325528   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:46.512379   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:46.618477   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:07:46.619131   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:46.827336   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:47.017047   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:47.077126   18652 kapi.go:107] duration metric: took 1m5.503746372s to wait for kubernetes.io/minikube-addons=registry ...
	I1201 19:07:47.077213   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:47.325992   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:47.515153   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:47.577281   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:47.826679   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:48.014756   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:48.076516   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:48.326142   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:48.513802   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:48.614376   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:48.825961   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:49.015011   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:49.077341   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:49.325168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:49.513073   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:49.576654   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:49.825466   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:50.014103   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:50.077045   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:50.325879   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:50.513203   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:50.614658   18652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:50.828549   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:51.015630   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:51.076545   18652 kapi.go:107] duration metric: took 1m9.503145797s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1201 19:07:51.326753   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:51.513068   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:51.826168   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:52.013694   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:52.325533   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:52.512486   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:52.826922   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:53.016479   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:53.326110   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:53.512982   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:53.825770   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:54.013803   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:54.326390   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:54.516172   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:54.825634   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:55.013788   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:55.325382   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:55.513150   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:55.826586   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:56.014515   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:56.325345   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:56.512456   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:56.825419   18652 kapi.go:107] duration metric: took 1m8.503101444s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1201 19:07:56.827680   18652 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-844427 cluster.
	I1201 19:07:56.829168   18652 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1201 19:07:56.830725   18652 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1201 19:07:57.013083   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:57.581441   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:58.014672   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:58.514034   18652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:59.014615   18652 kapi.go:107] duration metric: took 1m17.005211485s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1201 19:07:59.016457   18652 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, registry-creds, storage-provisioner, metrics-server, default-storageclass, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1201 19:07:59.017822   18652 addons.go:530] duration metric: took 1m19.007710614s for enable addons: enabled=[nvidia-device-plugin ingress-dns amd-gpu-device-plugin inspektor-gadget cloud-spanner registry-creds storage-provisioner metrics-server default-storageclass yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1201 19:07:59.017869   18652 start.go:247] waiting for cluster config update ...
	I1201 19:07:59.017892   18652 start.go:256] writing updated cluster config ...
	I1201 19:07:59.018200   18652 ssh_runner.go:195] Run: rm -f paused
	I1201 19:07:59.022112   18652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:07:59.024741   18652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kt5tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.028530   18652 pod_ready.go:94] pod "coredns-66bc5c9577-kt5tx" is "Ready"
	I1201 19:07:59.028552   18652 pod_ready.go:86] duration metric: took 3.79006ms for pod "coredns-66bc5c9577-kt5tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.030078   18652 pod_ready.go:83] waiting for pod "etcd-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.033025   18652 pod_ready.go:94] pod "etcd-addons-844427" is "Ready"
	I1201 19:07:59.033041   18652 pod_ready.go:86] duration metric: took 2.944917ms for pod "etcd-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.035096   18652 pod_ready.go:83] waiting for pod "kube-apiserver-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.038936   18652 pod_ready.go:94] pod "kube-apiserver-addons-844427" is "Ready"
	I1201 19:07:59.038952   18652 pod_ready.go:86] duration metric: took 3.824909ms for pod "kube-apiserver-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.040541   18652 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.426072   18652 pod_ready.go:94] pod "kube-controller-manager-addons-844427" is "Ready"
	I1201 19:07:59.426100   18652 pod_ready.go:86] duration metric: took 385.541798ms for pod "kube-controller-manager-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:59.626191   18652 pod_ready.go:83] waiting for pod "kube-proxy-7w28c" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.025673   18652 pod_ready.go:94] pod "kube-proxy-7w28c" is "Ready"
	I1201 19:08:00.025699   18652 pod_ready.go:86] duration metric: took 399.482252ms for pod "kube-proxy-7w28c" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.225474   18652 pod_ready.go:83] waiting for pod "kube-scheduler-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.625975   18652 pod_ready.go:94] pod "kube-scheduler-addons-844427" is "Ready"
	I1201 19:08:00.626004   18652 pod_ready.go:86] duration metric: took 400.505594ms for pod "kube-scheduler-addons-844427" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:08:00.626021   18652 pod_ready.go:40] duration metric: took 1.603881774s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:08:00.669462   18652 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 19:08:00.671275   18652 out.go:179] * Done! kubectl is now configured to use "addons-844427" cluster and "default" namespace by default
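
Editor's note: the gcp-auth hints in the log above mention opting a pod out of credential injection by adding a label with the `gcp-auth-skip-secret` key. The sketch below is not part of the test run; it is a minimal, hypothetical client-go example of creating such a pod. The label value "true" and the pod/namespace names are assumptions (the log only names the label key), and the image is reused from the busybox pull seen later in the CRI-O log.

    // Hypothetical sketch: create a pod that the minikube gcp-auth webhook
    // should skip, by setting the gcp-auth-skip-secret label mentioned in
    // the addon output above. Value "true" is an assumption.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig that "minikube start" writes (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "demo",                                           // illustrative name
                Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // skip credential mount
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "demo",
                    Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
                    Command: []string{"sleep", "3600"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

Per the log, pods created before the addon finished would need to be recreated (or the addon re-enabled with --refresh) for injection to apply.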
	
	
	==> CRI-O <==
	Dec 01 19:07:57 addons-844427 crio[770]: time="2025-12-01T19:07:57.710101542Z" level=info msg="Starting container: ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee" id=c85adf44-7e4b-4145-9033-4824af9f43c8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 19:07:57 addons-844427 crio[770]: time="2025-12-01T19:07:57.712779422Z" level=info msg="Started container" PID=6053 containerID=ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee description=kube-system/csi-hostpathplugin-84njl/csi-snapshotter id=c85adf44-7e4b-4145-9033-4824af9f43c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49accf8e633515b7eb0378f90361512b8ba1ae40dbc2ec2d1f290012a132fc43
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.553040901Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9e41bad5-f78b-444f-b6ec-b406b288201f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.553096343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.559656824Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec4bca8d0366ba2a0223adf7c4d70621c8b445199fbbc5cb80d54c9358c5d6da UID:75ad87fe-d027-4b9e-8a21-f3d54dae5a67 NetNS:/var/run/netns/f98fb788-019d-400c-889b-ef2f282886cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00002c838}] Aliases:map[]}"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.559720532Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.570688924Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec4bca8d0366ba2a0223adf7c4d70621c8b445199fbbc5cb80d54c9358c5d6da UID:75ad87fe-d027-4b9e-8a21-f3d54dae5a67 NetNS:/var/run/netns/f98fb788-019d-400c-889b-ef2f282886cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00002c838}] Aliases:map[]}"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.570803099Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.57167694Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.57244924Z" level=info msg="Ran pod sandbox ec4bca8d0366ba2a0223adf7c4d70621c8b445199fbbc5cb80d54c9358c5d6da with infra container: default/busybox/POD" id=9e41bad5-f78b-444f-b6ec-b406b288201f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.573648921Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e01b13e4-fd93-466c-8cd8-e64a03f31c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.57378062Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e01b13e4-fd93-466c-8cd8-e64a03f31c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.573826656Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e01b13e4-fd93-466c-8cd8-e64a03f31c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.574478126Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a6f76d2-311d-4ba1-bc3a-fa2e3b163536 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:08:01 addons-844427 crio[770]: time="2025-12-01T19:08:01.575957953Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.909707746Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1a6f76d2-311d-4ba1-bc3a-fa2e3b163536 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.910371777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d9c494c3-e8ef-4816-a2ee-d55821cae004 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.911715644Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8b22a864-cc45-44f7-bc20-391a63a699cc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.915271796Z" level=info msg="Creating container: default/busybox/busybox" id=a947264f-02d8-4f72-84c2-a31de372567a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.915395898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.920593167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.921205893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.949963007Z" level=info msg="Created container fb90e673f579f5fbffc3e0152d8b293ee886456575909cd66f7f842b0eb70408: default/busybox/busybox" id=a947264f-02d8-4f72-84c2-a31de372567a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.950578139Z" level=info msg="Starting container: fb90e673f579f5fbffc3e0152d8b293ee886456575909cd66f7f842b0eb70408" id=f3633d07-f24f-484b-ba55-ff1a770c810a name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 19:08:02 addons-844427 crio[770]: time="2025-12-01T19:08:02.952265337Z" level=info msg="Started container" PID=6174 containerID=fb90e673f579f5fbffc3e0152d8b293ee886456575909cd66f7f842b0eb70408 description=default/busybox/busybox id=f3633d07-f24f-484b-ba55-ff1a770c810a name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec4bca8d0366ba2a0223adf7c4d70621c8b445199fbbc5cb80d54c9358c5d6da
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	fb90e673f579f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   ec4bca8d0366b       busybox                                    default
	ea87d05f6e32f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	ce685fdd387b8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   ed8504c79dc5f       gcp-auth-78565c9fb4-67cg8                  gcp-auth
	9a5fa01966568       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          17 seconds ago       Running             csi-provisioner                          0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	9079f8e7ee755       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            18 seconds ago       Running             liveness-probe                           0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	9ad4ad6057500       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	86bd88fd13749       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            19 seconds ago       Running             gadget                                   0                   69ebf86d7c6c0       gadget-vbz2n                               gadget
	ca14100103757       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	a70e8f8cd00b4       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             22 seconds ago       Running             controller                               0                   4fbbb364a4e0f       ingress-nginx-controller-6c8bf45fb-4rcgb   ingress-nginx
	f6fc7935fddb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago       Running             registry-proxy                           0                   05b9091cf35ad       registry-proxy-q7742                       kube-system
	d3bb04d9d3c1d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   84df82f524e05       nvidia-device-plugin-daemonset-v667z       kube-system
	9f5f39915b7c1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     30 seconds ago       Running             amd-gpu-device-plugin                    0                   7d256bc8602ab       amd-gpu-device-plugin-wbc9c                kube-system
	7c8ad6d89b920       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              31 seconds ago       Running             csi-resizer                              0                   0a5b664c09097       csi-hostpath-resizer-0                     kube-system
	fff64f001bd5f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   32 seconds ago       Running             csi-external-health-monitor-controller   0                   49accf8e63351       csi-hostpathplugin-84njl                   kube-system
	c08210753710d       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             32 seconds ago       Exited              patch                                    1                   c2bcf79be2e9a       gcp-auth-certs-patch-rllp4                 gcp-auth
	bb66d0b0d855b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   32 seconds ago       Exited              create                                   0                   e4811668dcf3c       gcp-auth-certs-create-r7jn4                gcp-auth
	eb1180791d4aa       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   b1e493ea11dc8       snapshot-controller-7d9fbc56b8-977vf       kube-system
	1a8c85353220f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              patch                                    0                   b719f0f081972       ingress-nginx-admission-patch-znqvm        ingress-nginx
	016dfc96303af       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   54a81db21ebc5       snapshot-controller-7d9fbc56b8-9ghp7       kube-system
	b8934753229d8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              34 seconds ago       Running             yakd                                     0                   2630379b2779e       yakd-dashboard-5ff678cb9-dmddq             yakd-dashboard
	f0e51975105af       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   36 seconds ago       Exited              create                                   0                   004e4d36f4dac       ingress-nginx-admission-create-7mpw6       ingress-nginx
	f0949ee283560       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             37 seconds ago       Running             csi-attacher                             0                   008b484423ddd       csi-hostpath-attacher-0                    kube-system
	9c0d148e12238       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago       Running             local-path-provisioner                   0                   557a9a8fcdc81       local-path-provisioner-648f6765c9-qzbbn    local-path-storage
	38134e01f2871       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        39 seconds ago       Running             metrics-server                           0                   d41da208176d3       metrics-server-85b7d694d7-xs4wl            kube-system
	1b74364792d43       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               40 seconds ago       Running             minikube-ingress-dns                     0                   4a7f73b525029       kube-ingress-dns-minikube                  kube-system
	1a5f66e8aa183       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           46 seconds ago       Running             registry                                 0                   8e4c553f5fff3       registry-6b586f9694-g722r                  kube-system
	33dd26e97b2ff       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               47 seconds ago       Running             cloud-spanner-emulator                   0                   03bb054ab5de4       cloud-spanner-emulator-5bdddb765-wxltm     default
	840acaec38326       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             49 seconds ago       Running             storage-provisioner                      0                   b5f9c56975193       storage-provisioner                        kube-system
	d2bdc76e2c839       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             49 seconds ago       Running             coredns                                  0                   a9d70c3b19c3b       coredns-66bc5c9577-kt5tx                   kube-system
	260635ba17a06       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   dd9f627436c9a       kube-proxy-7w28c                           kube-system
	83e6fdffcf712       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   e9eca24863587       kindnet-p8gkr                              kube-system
	3db6a1c2f5cc4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   747804815b279       kube-scheduler-addons-844427               kube-system
	08674a3640b68       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   9e50ce607e3ef       kube-apiserver-addons-844427               kube-system
	58571469b8e13       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   ae333106d829b       etcd-addons-844427                         kube-system
	e6177f5ff208e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   122ff4d46e5c8       kube-controller-manager-addons-844427      kube-system
	
	
	==> coredns [d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced] <==
	[INFO] 10.244.0.13:41546 - 56315 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160666s
	[INFO] 10.244.0.13:37141 - 14699 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111014s
	[INFO] 10.244.0.13:37141 - 14962 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015619s
	[INFO] 10.244.0.13:59703 - 10739 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000076555s
	[INFO] 10.244.0.13:59703 - 10442 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.00009514s
	[INFO] 10.244.0.13:35361 - 33751 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000065078s
	[INFO] 10.244.0.13:35361 - 33980 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000088699s
	[INFO] 10.244.0.13:58333 - 23288 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000077867s
	[INFO] 10.244.0.13:58333 - 22789 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000121429s
	[INFO] 10.244.0.13:38876 - 25060 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119543s
	[INFO] 10.244.0.13:38876 - 24590 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126288s
	[INFO] 10.244.0.22:60412 - 45845 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190744s
	[INFO] 10.244.0.22:35829 - 44859 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000243648s
	[INFO] 10.244.0.22:52183 - 31775 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164274s
	[INFO] 10.244.0.22:50260 - 37366 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202318s
	[INFO] 10.244.0.22:46402 - 25533 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110396s
	[INFO] 10.244.0.22:54272 - 41620 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129495s
	[INFO] 10.244.0.22:48560 - 15298 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005282148s
	[INFO] 10.244.0.22:34168 - 55997 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00732326s
	[INFO] 10.244.0.22:43134 - 10486 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005522831s
	[INFO] 10.244.0.22:43871 - 62542 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007501145s
	[INFO] 10.244.0.22:33089 - 17800 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007585467s
	[INFO] 10.244.0.22:43899 - 65332 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007696369s
	[INFO] 10.244.0.22:59405 - 51136 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000836097s
	[INFO] 10.244.0.22:44400 - 39171 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001196545s
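
Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary search-domain expansion, not failures: the pod resolv.conf that Kubernetes typically generates uses ndots:5, so names like registry.kube-system.svc.cluster.local and storage.googleapis.com are tried against every search suffix before being queried as-is. The small sketch below is illustrative only; the search list is copied from the query names in this CoreDNS log.

    // Illustrative only: reproduce the query order implied by the CoreDNS
    // log above for a name with fewer dots than ndots (assumed ndots:5).
    package main

    import "fmt"

    func main() {
        name := "registry.kube-system.svc.cluster.local"
        search := []string{
            "svc.cluster.local",
            "cluster.local",
            "us-east4-a.c.k8s-minikube.internal",
            "c.k8s-minikube.internal",
            "google.internal",
        }
        for _, s := range search {
            fmt.Printf("query %s.%s -> NXDOMAIN (expected)\n", name, s)
        }
        // Only the final, absolute query is expected to succeed.
        fmt.Printf("query %s. -> NOERROR\n", name)
    }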
	
	
	==> describe nodes <==
	Name:               addons-844427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-844427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=addons-844427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T19_06_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-844427
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-844427"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 19:06:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-844427
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 19:08:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 19:08:06 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 19:08:06 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 19:08:06 +0000   Mon, 01 Dec 2025 19:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 19:08:06 +0000   Mon, 01 Dec 2025 19:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-844427
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                0cb0da7c-3b5c-4a34-a77d-6a324b2594f4
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-wxltm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  gadget                      gadget-vbz2n                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-78565c9fb4-67cg8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-4rcgb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         91s
	  kube-system                 amd-gpu-device-plugin-wbc9c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-66bc5c9577-kt5tx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 csi-hostpathplugin-84njl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 etcd-addons-844427                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-p8gkr                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-844427                250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-addons-844427       200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-7w28c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-844427                100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-85b7d694d7-xs4wl             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 nvidia-device-plugin-daemonset-v667z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 registry-6b586f9694-g722r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-creds-764b6fb674-sqhck             0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-proxy-q7742                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 snapshot-controller-7d9fbc56b8-977vf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-9ghp7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  local-path-storage          local-path-provisioner-648f6765c9-qzbbn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-dmddq              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x8 over 103s)  kubelet          Node addons-844427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 103s)  kubelet          Node addons-844427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 103s)  kubelet          Node addons-844427 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node addons-844427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node addons-844427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node addons-844427 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node addons-844427 event: Registered Node addons-844427 in Controller
	  Normal  NodeReady                51s                  kubelet          Node addons-844427 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 1 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.401136] i8042: Warning: Keylock active
	[  +0.010565] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494180] block sda: the capability attribute has been deprecated.
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f] <==
	{"level":"warn","ts":"2025-12-01T19:06:31.691484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.698063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.705482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.712067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.721164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.732218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.739749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.758512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.762413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.770327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.776752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:31.831160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:42.430156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:06:42.436693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.224642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.231254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.248085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:09.254466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:07:30.694577Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.473028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-01T19:07:30.694575Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.368422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:30.694674Z","caller":"traceutil/trace.go:172","msg":"trace[1707364333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"182.589675ms","start":"2025-12-01T19:07:30.512069Z","end":"2025-12-01T19:07:30.694659Z","steps":["trace[1707364333] 'range keys from in-memory index tree'  (duration: 182.39346ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:30.694687Z","caller":"traceutil/trace.go:172","msg":"trace[646499937] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"119.50243ms","start":"2025-12-01T19:07:30.575174Z","end":"2025-12-01T19:07:30.694677Z","steps":["trace[646499937] 'range keys from in-memory index tree'  (duration: 119.286722ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:07:30.694583Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.398225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:30.694722Z","caller":"traceutil/trace.go:172","msg":"trace[1487492697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"119.538301ms","start":"2025-12-01T19:07:30.575174Z","end":"2025-12-01T19:07:30.694712Z","steps":["trace[1487492697] 'range keys from in-memory index tree'  (duration: 119.324242ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:49.747098Z","caller":"traceutil/trace.go:172","msg":"trace[669268451] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"108.048788ms","start":"2025-12-01T19:07:49.639034Z","end":"2025-12-01T19:07:49.747083Z","steps":["trace[669268451] 'process raft request'  (duration: 106.548585ms)"],"step_count":1}
	
	
	==> gcp-auth [ce685fdd387b8219a747fbbc8f9350a09da3aa15276223c93502f01f6831a292] <==
	2025/12/01 19:07:56 GCP Auth Webhook started!
	2025/12/01 19:08:00 Ready to marshal response ...
	2025/12/01 19:08:00 Ready to write response ...
	2025/12/01 19:08:01 Ready to marshal response ...
	2025/12/01 19:08:01 Ready to write response ...
	2025/12/01 19:08:01 Ready to marshal response ...
	2025/12/01 19:08:01 Ready to write response ...
	
	
	==> kernel <==
	 19:08:12 up 50 min,  0 user,  load average: 2.19, 1.04, 0.40
	Linux addons-844427 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6] <==
	I1201 19:06:41.223964       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T19:06:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 19:06:41.465855       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 19:06:41.465882       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 19:06:41.465893       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 19:06:41.466006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1201 19:07:11.466642       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1201 19:07:11.466651       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1201 19:07:11.466644       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1201 19:07:11.518422       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1201 19:07:12.766350       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 19:07:12.766382       1 metrics.go:72] Registering metrics
	I1201 19:07:12.766439       1 controller.go:711] "Syncing nftables rules"
	I1201 19:07:21.472433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:07:21.472490       1 main.go:301] handling current node
	I1201 19:07:31.468392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:07:31.468455       1 main.go:301] handling current node
	I1201 19:07:41.465319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:07:41.465353       1 main.go:301] handling current node
	I1201 19:07:51.465155       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:07:51.465217       1 main.go:301] handling current node
	I1201 19:08:01.465687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:08:01.465720       1 main.go:301] handling current node
	I1201 19:08:11.464916       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:08:11.464947       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39] <==
	W1201 19:07:34.611462       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 19:07:34.611557       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.23:443: connect: connection refused" logger="UnhandledError"
	E1201 19:07:34.611581       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1201 19:07:34.611928       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.23:443: connect: connection refused" logger="UnhandledError"
	W1201 19:07:35.613572       1 handler_proxy.go:99] no RequestInfo found in the context
	W1201 19:07:35.613601       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 19:07:35.613639       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1201 19:07:35.613655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1201 19:07:35.613656       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1201 19:07:35.614798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1201 19:07:39.623365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.23:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W1201 19:07:39.623378       1 handler_proxy.go:99] no RequestInfo found in the context
	E1201 19:07:39.623459       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1201 19:07:39.633252       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1201 19:08:10.384367       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33602: use of closed network connection
	E1201 19:08:10.526877       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33626: use of closed network connection
	
	
	==> kube-controller-manager [e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6] <==
	I1201 19:06:39.211575       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 19:06:39.212314       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 19:06:39.212454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 19:06:39.212477       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 19:06:39.214900       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 19:06:39.214922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:06:39.214935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:06:39.214972       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 19:06:39.215038       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 19:06:39.215095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 19:06:39.215105       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 19:06:39.215112       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 19:06:39.220939       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-844427" podCIDRs=["10.244.0.0/24"]
	I1201 19:06:39.220963       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 19:06:39.233162       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1201 19:07:09.218963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 19:07:09.219101       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1201 19:07:09.219137       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1201 19:07:09.239654       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1201 19:07:09.242961       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1201 19:07:09.319797       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:07:09.343303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 19:07:24.214577       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1201 19:07:39.324905       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1201 19:07:39.350414       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5] <==
	I1201 19:06:41.226219       1 server_linux.go:53] "Using iptables proxy"
	I1201 19:06:41.347808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 19:06:41.448188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 19:06:41.448231       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:06:41.448344       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:06:41.479863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:06:41.479906       1 server_linux.go:132] "Using iptables Proxier"
	I1201 19:06:41.485752       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:06:41.491114       1 server.go:527] "Version info" version="v1.34.2"
	I1201 19:06:41.491373       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:06:41.493614       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:06:41.493648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:06:41.493748       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:06:41.493781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:06:41.493816       1 config.go:200] "Starting service config controller"
	I1201 19:06:41.493827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:06:41.494019       1 config.go:309] "Starting node config controller"
	I1201 19:06:41.494026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:06:41.594242       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:06:41.594272       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 19:06:41.594317       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 19:06:41.594325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750] <==
	E1201 19:06:32.229517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 19:06:32.229750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:32.229850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:06:32.229897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 19:06:32.229904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 19:06:32.229899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:32.229959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 19:06:32.230084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 19:06:32.230107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:06:32.230174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 19:06:32.229891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:32.230176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:06:32.230191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 19:06:32.230220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 19:06:32.230281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 19:06:32.230378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 19:06:32.230379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:33.130753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:33.170903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:33.189107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 19:06:33.192118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:06:33.339162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:33.379187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:33.445795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1201 19:06:33.726762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 19:07:41 addons-844427 kubelet[1291]: I1201 19:07:41.691006    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f77f6b-2f98-4e3e-9a9c-da0465229f80-kube-api-access-dg6tc" (OuterVolumeSpecName: "kube-api-access-dg6tc") pod "b4f77f6b-2f98-4e3e-9a9c-da0465229f80" (UID: "b4f77f6b-2f98-4e3e-9a9c-da0465229f80"). InnerVolumeSpecName "kube-api-access-dg6tc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 01 19:07:41 addons-844427 kubelet[1291]: I1201 19:07:41.789849    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dg6tc\" (UniqueName: \"kubernetes.io/projected/b4f77f6b-2f98-4e3e-9a9c-da0465229f80-kube-api-access-dg6tc\") on node \"addons-844427\" DevicePath \"\""
	Dec 01 19:07:42 addons-844427 kubelet[1291]: I1201 19:07:42.588791    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bcf79be2e9a83415a8c4e0d62eaa2ab342f3213a5c9a332686b1ac55a0b21c"
	Dec 01 19:07:42 addons-844427 kubelet[1291]: I1201 19:07:42.590617    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wbc9c" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:42 addons-844427 kubelet[1291]: I1201 19:07:42.602725    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-wbc9c" podStartSLOduration=2.426864496 podStartE2EDuration="21.602700112s" podCreationTimestamp="2025-12-01 19:07:21 +0000 UTC" firstStartedPulling="2025-12-01 19:07:22.358382446 +0000 UTC m=+48.077191313" lastFinishedPulling="2025-12-01 19:07:41.534218066 +0000 UTC m=+67.253026929" observedRunningTime="2025-12-01 19:07:42.6026138 +0000 UTC m=+68.321422684" watchObservedRunningTime="2025-12-01 19:07:42.602700112 +0000 UTC m=+68.321508987"
	Dec 01 19:07:43 addons-844427 kubelet[1291]: I1201 19:07:43.593345    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wbc9c" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:45 addons-844427 kubelet[1291]: I1201 19:07:45.602462    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-v667z" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:45 addons-844427 kubelet[1291]: I1201 19:07:45.616708    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-v667z" podStartSLOduration=2.370368983 podStartE2EDuration="24.616686088s" podCreationTimestamp="2025-12-01 19:07:21 +0000 UTC" firstStartedPulling="2025-12-01 19:07:22.358795894 +0000 UTC m=+48.077604763" lastFinishedPulling="2025-12-01 19:07:44.605113002 +0000 UTC m=+70.323921868" observedRunningTime="2025-12-01 19:07:45.616153238 +0000 UTC m=+71.334962132" watchObservedRunningTime="2025-12-01 19:07:45.616686088 +0000 UTC m=+71.335494975"
	Dec 01 19:07:46 addons-844427 kubelet[1291]: I1201 19:07:46.614545    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-v667z" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:46 addons-844427 kubelet[1291]: I1201 19:07:46.615206    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q7742" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:47 addons-844427 kubelet[1291]: I1201 19:07:47.618582    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q7742" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:07:49 addons-844427 kubelet[1291]: I1201 19:07:49.636102    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-q7742" podStartSLOduration=4.717044162 podStartE2EDuration="28.636082195s" podCreationTimestamp="2025-12-01 19:07:21 +0000 UTC" firstStartedPulling="2025-12-01 19:07:22.413675618 +0000 UTC m=+48.132484488" lastFinishedPulling="2025-12-01 19:07:46.332713653 +0000 UTC m=+72.051522521" observedRunningTime="2025-12-01 19:07:46.636177677 +0000 UTC m=+72.354986573" watchObservedRunningTime="2025-12-01 19:07:49.636082195 +0000 UTC m=+75.354891078"
	Dec 01 19:07:50 addons-844427 kubelet[1291]: I1201 19:07:50.643859    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-4rcgb" podStartSLOduration=57.693043305 podStartE2EDuration="1m9.643839148s" podCreationTimestamp="2025-12-01 19:06:41 +0000 UTC" firstStartedPulling="2025-12-01 19:07:37.87945121 +0000 UTC m=+63.598260079" lastFinishedPulling="2025-12-01 19:07:49.830247028 +0000 UTC m=+75.549055922" observedRunningTime="2025-12-01 19:07:50.643730599 +0000 UTC m=+76.362539483" watchObservedRunningTime="2025-12-01 19:07:50.643839148 +0000 UTC m=+76.362648033"
	Dec 01 19:07:52 addons-844427 kubelet[1291]: I1201 19:07:52.657003    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-vbz2n" podStartSLOduration=64.365602499 podStartE2EDuration="1m11.656985819s" podCreationTimestamp="2025-12-01 19:06:41 +0000 UTC" firstStartedPulling="2025-12-01 19:07:45.284915019 +0000 UTC m=+71.003723881" lastFinishedPulling="2025-12-01 19:07:52.576298332 +0000 UTC m=+78.295107201" observedRunningTime="2025-12-01 19:07:52.656573414 +0000 UTC m=+78.375382299" watchObservedRunningTime="2025-12-01 19:07:52.656985819 +0000 UTC m=+78.375794730"
	Dec 01 19:07:53 addons-844427 kubelet[1291]: E1201 19:07:53.792922    1291 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 01 19:07:53 addons-844427 kubelet[1291]: E1201 19:07:53.793000    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6be056a-d2f0-4bd2-a225-0755fd0d6439-gcr-creds podName:f6be056a-d2f0-4bd2-a225-0755fd0d6439 nodeName:}" failed. No retries permitted until 2025-12-01 19:08:25.792985329 +0000 UTC m=+111.511794205 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/f6be056a-d2f0-4bd2-a225-0755fd0d6439-gcr-creds") pod "registry-creds-764b6fb674-sqhck" (UID: "f6be056a-d2f0-4bd2-a225-0755fd0d6439") : secret "registry-creds-gcr" not found
	Dec 01 19:07:54 addons-844427 kubelet[1291]: I1201 19:07:54.409969    1291 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 01 19:07:54 addons-844427 kubelet[1291]: I1201 19:07:54.410014    1291 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 01 19:07:56 addons-844427 kubelet[1291]: I1201 19:07:56.678710    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-67cg8" podStartSLOduration=66.2694877 podStartE2EDuration="1m8.678689953s" podCreationTimestamp="2025-12-01 19:06:48 +0000 UTC" firstStartedPulling="2025-12-01 19:07:54.079885862 +0000 UTC m=+79.798694728" lastFinishedPulling="2025-12-01 19:07:56.489088097 +0000 UTC m=+82.207896981" observedRunningTime="2025-12-01 19:07:56.677001291 +0000 UTC m=+82.395810175" watchObservedRunningTime="2025-12-01 19:07:56.678689953 +0000 UTC m=+82.397498837"
	Dec 01 19:07:58 addons-844427 kubelet[1291]: I1201 19:07:58.691000    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-84njl" podStartSLOduration=2.369466661 podStartE2EDuration="37.690979341s" podCreationTimestamp="2025-12-01 19:07:21 +0000 UTC" firstStartedPulling="2025-12-01 19:07:22.347960373 +0000 UTC m=+48.066769240" lastFinishedPulling="2025-12-01 19:07:57.669473044 +0000 UTC m=+83.388281920" observedRunningTime="2025-12-01 19:07:58.690383135 +0000 UTC m=+84.409192019" watchObservedRunningTime="2025-12-01 19:07:58.690979341 +0000 UTC m=+84.409788406"
	Dec 01 19:08:01 addons-844427 kubelet[1291]: I1201 19:08:01.346718    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8d9\" (UniqueName: \"kubernetes.io/projected/75ad87fe-d027-4b9e-8a21-f3d54dae5a67-kube-api-access-ws8d9\") pod \"busybox\" (UID: \"75ad87fe-d027-4b9e-8a21-f3d54dae5a67\") " pod="default/busybox"
	Dec 01 19:08:01 addons-844427 kubelet[1291]: I1201 19:08:01.346812    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/75ad87fe-d027-4b9e-8a21-f3d54dae5a67-gcp-creds\") pod \"busybox\" (UID: \"75ad87fe-d027-4b9e-8a21-f3d54dae5a67\") " pod="default/busybox"
	Dec 01 19:08:03 addons-844427 kubelet[1291]: I1201 19:08:03.718175    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.381101575 podStartE2EDuration="2.718153658s" podCreationTimestamp="2025-12-01 19:08:01 +0000 UTC" firstStartedPulling="2025-12-01 19:08:01.574070845 +0000 UTC m=+87.292879712" lastFinishedPulling="2025-12-01 19:08:02.911122931 +0000 UTC m=+88.629931795" observedRunningTime="2025-12-01 19:08:03.718050933 +0000 UTC m=+89.436859823" watchObservedRunningTime="2025-12-01 19:08:03.718153658 +0000 UTC m=+89.436962541"
	Dec 01 19:08:12 addons-844427 kubelet[1291]: I1201 19:08:12.364868    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbde5bb-91cd-4e9f-82ce-8bcfbddfae04" path="/var/lib/kubelet/pods/2bbde5bb-91cd-4e9f-82ce-8bcfbddfae04/volumes"
	Dec 01 19:08:12 addons-844427 kubelet[1291]: I1201 19:08:12.365620    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f77f6b-2f98-4e3e-9a9c-da0465229f80" path="/var/lib/kubelet/pods/b4f77f6b-2f98-4e3e-9a9c-da0465229f80/volumes"
	
	
	==> storage-provisioner [840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30] <==
	W1201 19:07:46.515199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:48.517774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:48.521864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:50.524413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:50.529102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:52.532396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:52.537258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:54.540829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:54.546209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:56.549603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:56.552978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:58.555824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:07:58.559624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:00.562455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:00.566173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:02.568938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:02.572726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:04.575916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:04.581117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:06.583981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:06.587279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:08.589726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:08.594981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:10.598641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:08:10.605157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-844427 -n addons-844427
helpers_test.go:269: (dbg) Run:  kubectl --context addons-844427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm registry-creds-764b6fb674-sqhck
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm registry-creds-764b6fb674-sqhck
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm registry-creds-764b6fb674-sqhck: exit status 1 (62.350659ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7mpw6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-znqvm" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sqhck" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-844427 describe pod ingress-nginx-admission-create-7mpw6 ingress-nginx-admission-patch-znqvm registry-creds-764b6fb674-sqhck: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.149341ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:13.095954   27562 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:13.096388   27562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:13.096403   27562 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:13.096409   27562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:13.096704   27562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:13.097080   27562 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:13.097580   27562 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:13.097616   27562 addons.go:622] checking whether the cluster is paused
	I1201 19:08:13.097766   27562 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:13.097788   27562 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:13.098458   27562 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:13.116815   27562 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:13.116880   27562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:13.134850   27562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:13.231839   27562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:13.231908   27562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:13.260149   27562 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:13.260170   27562 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:13.260183   27562 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:13.260189   27562 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:13.260192   27562 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:13.260197   27562 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:13.260202   27562 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:13.260206   27562 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:13.260210   27562 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:13.260217   27562 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:13.260221   27562 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:13.260227   27562 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:13.260232   27562 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:13.260238   27562 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:13.260243   27562 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:13.260253   27562 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:13.260259   27562 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:13.260264   27562 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:13.260268   27562 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:13.260272   27562 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:13.260276   27562 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:13.260280   27562 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:13.260306   27562 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:13.260312   27562 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:13.260321   27562 cri.go:89] found id: ""
	I1201 19:08:13.260366   27562 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:13.274255   27562 out.go:203] 
	W1201 19:08:13.275934   27562 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:13.275961   27562 out.go:285] * 
	* 
	W1201 19:08:13.281270   27562 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:13.282635   27562 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.51s)
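Reading the stderr above, the headlamp workload itself is not the problem: `minikube addons disable` first checks whether the cluster is paused, and that pre-check shells out to `sudo runc list -f json`, which exits with status 1 on this crio node because /run/runc does not exist. The following is only a diagnostic sketch for reproducing the two probes by hand, assuming the same profile name (addons-844427) and `minikube ssh` access to the node; both commands are copied from the trace above.

	# Probe that succeeded in the trace: list kube-system containers over the CRI.
	minikube -p addons-844427 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# Probe that failed and produced MK_ADDON_DISABLE_PAUSED; on this crio node it is
	# expected to print "open /run/runc: no such file or directory".
	minikube -p addons-844427 ssh -- sudo runc list -f json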

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-wxltm" [85361118-dfc2-4f86-b7b8-4c2353bfbf53] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003081816s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.437002ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:21.104867   28179 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:21.105129   28179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.105137   28179 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:21.105141   28179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.105332   28179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:21.105575   28179 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:21.105921   28179 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.105939   28179 addons.go:622] checking whether the cluster is paused
	I1201 19:08:21.106063   28179 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.106081   28179 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:21.106629   28179 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:21.125861   28179 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:21.125909   28179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:21.146224   28179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:21.246910   28179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:21.246988   28179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:21.276558   28179 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:21.276580   28179 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:21.276585   28179 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:21.276590   28179 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:21.276594   28179 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:21.276601   28179 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:21.276606   28179 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:21.276610   28179 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:21.276615   28179 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:21.276623   28179 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:21.276644   28179 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:21.276652   28179 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:21.276657   28179 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:21.276662   28179 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:21.276666   28179 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:21.276681   28179 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:21.276684   28179 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:21.276687   28179 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:21.276690   28179 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:21.276692   28179 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:21.276695   28179 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:21.276698   28179 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:21.276700   28179 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:21.276703   28179 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:21.276706   28179 cri.go:89] found id: ""
	I1201 19:08:21.276748   28179 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:21.291427   28179 out.go:203] 
	W1201 19:08:21.292616   28179 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:21.292641   28179 out.go:285] * 
	* 
	W1201 19:08:21.295757   28179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:21.296962   28179 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
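Same signature as the Headlamp failure above: the cloud-spanner-emulator pod became healthy within about 5s, and only the paused-state pre-check of `addons disable` (`sudo runc list -f json`) failed. As a hedged follow-up check, reusing the context and label taken from this block, one could confirm the addon workload itself was untouched:

	# Expectation from the wait step above: cloud-spanner-emulator-5bdddb765-wxltm is Running.
	kubectl --context addons-844427 get pods -n default -l app=cloud-spanner-emulator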

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-844427 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-844427 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [205f165e-b37b-446f-bbd2-f0bcb44c66aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [205f165e-b37b-446f-bbd2-f0bcb44c66aa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [205f165e-b37b-446f-bbd2-f0bcb44c66aa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003096504s
addons_test.go:967: (dbg) Run:  kubectl --context addons-844427 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 ssh "cat /opt/local-path-provisioner/pvc-151ebd6f-1249-4e0a-b7bb-e835b33c9271_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-844427 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-844427 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (257.029748ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:21.193258   28239 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:21.193557   28239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.193566   28239 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:21.193570   28239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.193783   28239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:21.194057   28239 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:21.194478   28239 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.194502   28239 addons.go:622] checking whether the cluster is paused
	I1201 19:08:21.194627   28239 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.194651   28239 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:21.195145   28239 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:21.215656   28239 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:21.215706   28239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:21.233972   28239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:21.334969   28239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:21.335038   28239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:21.365466   28239 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:21.365498   28239 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:21.365504   28239 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:21.365509   28239 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:21.365513   28239 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:21.365519   28239 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:21.365523   28239 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:21.365527   28239 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:21.365532   28239 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:21.365587   28239 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:21.365593   28239 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:21.365597   28239 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:21.365601   28239 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:21.365605   28239 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:21.365610   28239 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:21.365620   28239 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:21.365625   28239 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:21.365631   28239 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:21.365635   28239 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:21.365639   28239 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:21.365643   28239 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:21.365647   28239 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:21.365652   28239 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:21.365656   28239 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:21.365660   28239 cri.go:89] found id: ""
	I1201 19:08:21.365716   28239 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:21.382633   28239 out.go:203] 
	W1201 19:08:21.383985   28239 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:21.384009   28239 out.go:285] * 
	* 
	W1201 19:08:21.387731   28239 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:21.389134   28239 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.11s)
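
Note on this failure (and the addon-disable failures that follow): every `addons disable` run above exits with MK_ADDON_DISABLE_PAUSED because minikube first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which fails on this crio node with `open /run/runc: no such file or directory`. The sketch below is not minikube's implementation; it only illustrates, under the assumption that a missing runc state directory simply means no runc-managed containers exist at that root, how a paused-container listing could tolerate the condition seen in these logs. The `/run/runc` default root and the lowercase `id`/`status` JSON keys follow runc's documented `list -f json` output.

// paused_check_sketch.go — illustrative only, not the minikube code path.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns IDs of paused containers under the given runc root.
// A missing root directory (the "/run/runc: no such file or directory"
// failure in the log above) is treated as "no containers" instead of an error.
func listPaused(root string) ([]string, error) {
	if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
		return nil, nil
	}
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	s := strings.TrimSpace(string(out))
	if s == "" || s == "null" { // runc prints "null" when the root holds no containers
		return nil, nil
	}
	var cs []runcContainer
	if err := json.Unmarshal([]byte(s), &cs); err != nil {
		return nil, fmt.Errorf("parse runc list output: %w", err)
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	// /run/runc is runc's default root; CRI-O may be configured with another.
	ids, err := listPaused("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("paused containers: %v\n", ids)
}

Run on the node as `go run paused_check_sketch.go` (needs runc and sudo); under the condition captured in this report it would report an empty list rather than the exit-status-1 error shown above.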

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-v667z" [444c689e-7ffe-4f0d-8b96-34c161bc1ef5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00316409s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (254.652559ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:15.842909   27785 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:15.843172   27785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:15.843182   27785 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:15.843186   27785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:15.843399   27785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:15.843625   27785 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:15.843908   27785 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:15.843928   27785 addons.go:622] checking whether the cluster is paused
	I1201 19:08:15.844005   27785 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:15.844019   27785 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:15.844517   27785 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:15.864207   27785 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:15.864267   27785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:15.883038   27785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:15.981344   27785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:15.981426   27785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:16.011884   27785 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:16.011905   27785 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:16.011911   27785 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:16.011917   27785 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:16.011922   27785 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:16.011927   27785 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:16.011931   27785 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:16.011938   27785 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:16.011947   27785 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:16.011955   27785 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:16.011963   27785 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:16.011967   27785 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:16.011976   27785 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:16.011981   27785 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:16.011989   27785 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:16.012011   27785 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:16.012021   27785 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:16.012026   27785 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:16.012030   27785 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:16.012034   27785 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:16.012039   27785 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:16.012043   27785 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:16.012047   27785 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:16.012051   27785 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:16.012056   27785 cri.go:89] found id: ""
	I1201 19:08:16.012127   27785 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:16.026062   27785 out.go:203] 
	W1201 19:08:16.027393   27785 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:16.027411   27785 out.go:285] * 
	* 
	W1201 19:08:16.030400   27785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:16.032364   27785 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-dmddq" [d95b9a05-ceba-4741-a4b2-33741b9bd8bf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002762527s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable yakd --alsologtostderr -v=1: exit status 11 (259.723427ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:21.105172   28178 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:21.105421   28178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.105430   28178 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:21.105435   28178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:21.105617   28178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:21.105836   28178 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:21.106270   28178 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.106308   28178 addons.go:622] checking whether the cluster is paused
	I1201 19:08:21.106407   28178 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:21.106422   28178 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:21.106746   28178 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:21.125484   28178 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:21.125542   28178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:21.145034   28178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:21.244786   28178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:21.244880   28178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:21.276618   28178 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:21.276643   28178 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:21.276653   28178 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:21.276659   28178 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:21.276664   28178 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:21.276670   28178 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:21.276682   28178 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:21.276687   28178 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:21.276691   28178 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:21.276704   28178 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:21.276709   28178 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:21.276713   28178 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:21.276718   28178 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:21.276722   28178 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:21.276727   28178 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:21.276741   28178 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:21.276752   28178 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:21.276758   28178 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:21.276765   28178 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:21.276773   28178 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:21.276778   28178 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:21.276782   28178 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:21.276790   28178 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:21.276794   28178 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:21.276799   28178 cri.go:89] found id: ""
	I1201 19:08:21.276870   28178 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:21.291423   28178 out.go:203] 
	W1201 19:08:21.292615   28178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:21.292632   28178 out.go:285] * 
	* 
	W1201 19:08:21.295674   28178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:21.296953   28178 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-wbc9c" [6ca4c03d-f88e-406c-b3e8-b6bcfbe29679] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003235208s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-844427 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-844427 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (254.335126ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:08:15.842936   27786 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:08:15.843105   27786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:15.843115   27786 out.go:374] Setting ErrFile to fd 2...
	I1201 19:08:15.843119   27786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:08:15.843341   27786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:08:15.843609   27786 mustload.go:66] Loading cluster: addons-844427
	I1201 19:08:15.843909   27786 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:15.843929   27786 addons.go:622] checking whether the cluster is paused
	I1201 19:08:15.844004   27786 config.go:182] Loaded profile config "addons-844427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:08:15.844020   27786 host.go:66] Checking if "addons-844427" exists ...
	I1201 19:08:15.844397   27786 cli_runner.go:164] Run: docker container inspect addons-844427 --format={{.State.Status}}
	I1201 19:08:15.863576   27786 ssh_runner.go:195] Run: systemctl --version
	I1201 19:08:15.863651   27786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-844427
	I1201 19:08:15.884407   27786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/addons-844427/id_rsa Username:docker}
	I1201 19:08:15.981217   27786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:08:15.981329   27786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:08:16.010743   27786 cri.go:89] found id: "ea87d05f6e32f7fcd653b7f61484a1e70179756ddd58646263d243509fb15eee"
	I1201 19:08:16.010764   27786 cri.go:89] found id: "9a5fa019665688e2ec0b0d686dd639f5efb3adcacb046b6d12cc835c756841bd"
	I1201 19:08:16.010770   27786 cri.go:89] found id: "9079f8e7ee755d8bb77175412fb823fd90663eaaf5184ccaeebf3897126728cf"
	I1201 19:08:16.010775   27786 cri.go:89] found id: "9ad4ad6057500ca106cefad54b427f51d519b5eeb8d6a1daaa1cf09decf38d40"
	I1201 19:08:16.010779   27786 cri.go:89] found id: "ca1410010375779e56c0b4bd81328f56629820cf29acfa9a5767bb8e3fbcd69c"
	I1201 19:08:16.010784   27786 cri.go:89] found id: "f6fc7935fddb5b69e893dae88b0847168f8982f00e3c4c8eeee5a2a44aea4801"
	I1201 19:08:16.010789   27786 cri.go:89] found id: "d3bb04d9d3c1d7e78f9e5d1ec05f62eda299d15af27004d2702f37f403810f75"
	I1201 19:08:16.010793   27786 cri.go:89] found id: "9f5f39915b7c1e0ef1f6a5b2d03d2c1d018c2891286406619e48fb3e7157b2a5"
	I1201 19:08:16.010798   27786 cri.go:89] found id: "7c8ad6d89b920e63d42c5c6e45e4343e3af45e357e37a4a91e5cd82c4e7bea66"
	I1201 19:08:16.010805   27786 cri.go:89] found id: "fff64f001bd5fc387df31a465416a24b56ea0e124ae568decfc8915f396690b9"
	I1201 19:08:16.010812   27786 cri.go:89] found id: "eb1180791d4aa6d3ccc79c094a9f357e5b8d7685cacf9982e5b63a115c881361"
	I1201 19:08:16.010817   27786 cri.go:89] found id: "016dfc96303afe3d8f92725879732cd1398619bd70cc0be8cf040fa14cc624eb"
	I1201 19:08:16.010824   27786 cri.go:89] found id: "f0949ee283560262f6bd2b62fbedf0aa6dd1b4b111b6dcca7eac75bb51055398"
	I1201 19:08:16.010829   27786 cri.go:89] found id: "38134e01f2871a2e460359e6bbda8669fb8dc791c32cb4a06e8eec6a4bc4bb75"
	I1201 19:08:16.010833   27786 cri.go:89] found id: "1b74364792d4355b2e201c22689395e50bec3ff39ad7f6eb21283afacabcf798"
	I1201 19:08:16.010843   27786 cri.go:89] found id: "1a5f66e8aa1836914786e7280b2107574ea418f6f8e848fb6360279352b7baf1"
	I1201 19:08:16.010850   27786 cri.go:89] found id: "840acaec38326b0a50fdd22ddd204b803bd9c6d6204924992f77cb4d130bef30"
	I1201 19:08:16.010857   27786 cri.go:89] found id: "d2bdc76e2c839af735b9a9dd618479d9695794c6e7be0f6761baf9f031484ced"
	I1201 19:08:16.010862   27786 cri.go:89] found id: "260635ba17a06bf4c92f4863a544e5a43bc481ea37093b8cd333c4f02aca7ba5"
	I1201 19:08:16.010866   27786 cri.go:89] found id: "83e6fdffcf712b57c797c1eb156a22da6cf851499c5df519462f3c021e3d91a6"
	I1201 19:08:16.010873   27786 cri.go:89] found id: "3db6a1c2f5cc4fb1061b4ca07065be4c6b1e192e789c2ecaa13a885537e90750"
	I1201 19:08:16.010876   27786 cri.go:89] found id: "08674a3640b68ed88c7f10defe2fe8c2369cbc24427aaaeb4c714fabe5f7cf39"
	I1201 19:08:16.010881   27786 cri.go:89] found id: "58571469b8e133c82c01bd419582359a17c4ff93cfe3ebd9035c7bf355d4c52f"
	I1201 19:08:16.010887   27786 cri.go:89] found id: "e6177f5ff208e7f8973998b6f676defab83006bcc67c49c4599c4f19f93b15a6"
	I1201 19:08:16.010891   27786 cri.go:89] found id: ""
	I1201 19:08:16.010944   27786 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 19:08:16.026054   27786 out.go:203] 
	W1201 19:08:16.027395   27786 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:08:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 19:08:16.027410   27786 out.go:285] * 
	* 
	W1201 19:08:16.030403   27786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 19:08:16.032367   27786 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-844427 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-764481 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-764481 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6grpm" [cd528b94-b6be-45b4-83db-f76e04d3ff0e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-764481 -n functional-764481
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-01 19:24:20.854525672 +0000 UTC m=+1113.642590268
functional_test.go:1645: (dbg) Run:  kubectl --context functional-764481 describe po hello-node-connect-7d85dfc575-6grpm -n default
functional_test.go:1645: (dbg) kubectl --context functional-764481 describe po hello-node-connect-7d85dfc575-6grpm -n default:
Name:             hello-node-connect-7d85dfc575-6grpm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-764481/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:14:20 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82nxq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-82nxq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6grpm to functional-764481
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     5m (x20 over 9m59s)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-764481 logs hello-node-connect-7d85dfc575-6grpm -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-764481 logs hello-node-connect-7d85dfc575-6grpm -n default: exit status 1 (65.627365ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6grpm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-764481 logs hello-node-connect-7d85dfc575-6grpm -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-764481 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6grpm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-764481/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:14:20 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82nxq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-82nxq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6grpm to functional-764481
Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-764481 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-764481 logs -l app=hello-node-connect: exit status 1 (62.075668ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6grpm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-764481 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-764481 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.84.154
IPs:                      10.106.84.154
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31097/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
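
The ImagePullBackOff events above show why the deployment never comes up: the test creates it with the unqualified image name kicbase/echo-server, and CRI-O's short-name handling on this node is in enforcing mode, so the name is rejected as ambiguous instead of being silently resolved against a default registry. The small Go sketch below only illustrates the normalization involved, using github.com/distribution/reference (an external module, not something this test suite imports); whether docker.io is the registry the test actually intends for this image is an assumption, and the fully qualified form is shown purely as an example of a reference that an enforcing short-name policy would accept without guessing.

// shortname_sketch.go — illustrative only; requires `go get github.com/distribution/reference`.
package main

import (
	"fmt"
	"os"

	"github.com/distribution/reference"
)

func main() {
	// The first entry is the name the test used; the second is a fully
	// qualified form (docker.io is an assumed registry, for illustration only).
	for _, img := range []string{"kicbase/echo-server", "docker.io/kicbase/echo-server:latest"} {
		named, err := reference.ParseNormalizedNamed(img)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", img, err)
			continue
		}
		// TagNameOnly appends ":latest" when no tag or digest is present,
		// matching the "kicbase/echo-server:latest" seen in the kubelet event.
		full := reference.TagNameOnly(named)
		fmt.Printf("%-42s -> %s (registry %s)\n", img, full.String(), reference.Domain(named))
	}
}

An enforcing short-name policy only ever sees the left-hand, registry-less form for the failing pod; a reference that already carries a registry, like the right-hand form, bypasses the ambiguity check entirely.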
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-764481
helpers_test.go:243: (dbg) docker inspect functional-764481:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38",
	        "Created": "2025-12-01T19:11:56.214375295Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40794,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T19:11:56.247818176Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38/hostname",
	        "HostsPath": "/var/lib/docker/containers/582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38/hosts",
	        "LogPath": "/var/lib/docker/containers/582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38/582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38-json.log",
	        "Name": "/functional-764481",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-764481:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-764481",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "582e7f1b3214d23f79217cda511f973e55650d24352f466f086d478a25deda38",
	                "LowerDir": "/var/lib/docker/overlay2/c9cd0dcaeacc1cdb5a63413c96d56c060ea4c465e9d24900b62ff09e03e01d71-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9cd0dcaeacc1cdb5a63413c96d56c060ea4c465e9d24900b62ff09e03e01d71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9cd0dcaeacc1cdb5a63413c96d56c060ea4c465e9d24900b62ff09e03e01d71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9cd0dcaeacc1cdb5a63413c96d56c060ea4c465e9d24900b62ff09e03e01d71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-764481",
	                "Source": "/var/lib/docker/volumes/functional-764481/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-764481",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-764481",
	                "name.minikube.sigs.k8s.io": "functional-764481",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "78f0431a73ead0dcf81e9362901d414680269227acb579ef871b2f77bc193d96",
	            "SandboxKey": "/var/run/docker/netns/78f0431a73ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-764481": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0faa025606e023e190d58bdca3dfcfb0187e509a1ca411543008f8bfc2b8872a",
	                    "EndpointID": "364806ac16b2bb09f3a460106e466281b873700e7bc0cbcc5754de6bce8cb8a6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "2a:a2:d3:34:8d:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-764481",
	                        "582e7f1b3214"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-764481 -n functional-764481
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 logs -n 25: (1.24878834s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr                                                                   │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls                                                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr                                                                   │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls                                                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr                                                                   │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls                                                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image save kicbase/echo-server:functional-764481 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image rm kicbase/echo-server:functional-764481 --alsologtostderr                                                                              │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls                                                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image save --daemon kicbase/echo-server:functional-764481 --alsologtostderr                                                                   │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ start          │ -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │                     │
	│ start          │ -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │                     │
	│ start          │ -p functional-764481 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-764481 --alsologtostderr -v=1                                                                                                  │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ update-context │ functional-764481 update-context --alsologtostderr -v=2                                                                                                         │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ update-context │ functional-764481 update-context --alsologtostderr -v=2                                                                                                         │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ update-context │ functional-764481 update-context --alsologtostderr -v=2                                                                                                         │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls --format short --alsologtostderr                                                                                                     │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls --format yaml --alsologtostderr                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ ssh            │ functional-764481 ssh pgrep buildkitd                                                                                                                           │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │                     │
	│ image          │ functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr                                                          │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls                                                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls --format json --alsologtostderr                                                                                                      │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	│ image          │ functional-764481 image ls --format table --alsologtostderr                                                                                                     │ functional-764481 │ jenkins │ v1.37.0 │ 01 Dec 25 19:14 UTC │ 01 Dec 25 19:14 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:14:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:14:30.041641   56188 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:14:30.041753   56188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:30.041764   56188 out.go:374] Setting ErrFile to fd 2...
	I1201 19:14:30.041770   56188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:30.042054   56188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:14:30.042556   56188 out.go:368] Setting JSON to false
	I1201 19:14:30.043471   56188 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3421,"bootTime":1764613049,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:14:30.043533   56188 start.go:143] virtualization: kvm guest
	I1201 19:14:30.045416   56188 out.go:179] * [functional-764481] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:14:30.046566   56188 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:14:30.046601   56188 notify.go:221] Checking for updates...
	I1201 19:14:30.048878   56188 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:14:30.050315   56188 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:14:30.051600   56188 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:14:30.052802   56188 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:14:30.053804   56188 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:14:30.055304   56188 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:14:30.055892   56188 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:14:30.079888   56188 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:14:30.079973   56188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:14:30.138004   56188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-01 19:14:30.126608091 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:14:30.138089   56188 docker.go:319] overlay module found
	I1201 19:14:30.139667   56188 out.go:179] * Using the docker driver based on existing profile
	I1201 19:14:30.140899   56188 start.go:309] selected driver: docker
	I1201 19:14:30.140910   56188 start.go:927] validating driver "docker" against &{Name:functional-764481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-764481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:14:30.140986   56188 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:14:30.141071   56188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:14:30.196138   56188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-01 19:14:30.186661868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:14:30.196792   56188 cni.go:84] Creating CNI manager for ""
	I1201 19:14:30.196861   56188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:14:30.196904   56188 start.go:353] cluster config:
	{Name:functional-764481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-764481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:14:30.198566   56188 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 01 19:14:36 functional-764481 crio[3607]: time="2025-12-01T19:14:36.462089591Z" level=info msg="Starting container: ff95859e4c689030cbe6517fa4f5d8dd653b8b0e514a6167385a313020b7a867" id=d8843477-7678-45fd-8ba2-be2ccf268dfd name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 19:14:36 functional-764481 crio[3607]: time="2025-12-01T19:14:36.463820704Z" level=info msg="Started container" PID=7739 containerID=ff95859e4c689030cbe6517fa4f5d8dd653b8b0e514a6167385a313020b7a867 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ghrzx/kubernetes-dashboard id=d8843477-7678-45fd-8ba2-be2ccf268dfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7690d15ea9c2769e4ee9d5d9619e2cb4ae7035785545415ff29db2d966545ac
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.951494862Z" level=info msg="Stopping pod sandbox: b4c8cf63ab0debe3bd828ca93178985b67304b82e3b79972a454693d284a2536" id=c77ff26b-61c8-445c-9b60-61fbaafba962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.951556683Z" level=info msg="Stopped pod sandbox (already stopped): b4c8cf63ab0debe3bd828ca93178985b67304b82e3b79972a454693d284a2536" id=c77ff26b-61c8-445c-9b60-61fbaafba962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.951995555Z" level=info msg="Removing pod sandbox: b4c8cf63ab0debe3bd828ca93178985b67304b82e3b79972a454693d284a2536" id=1dc2c952-017e-46c3-b6cd-871a55cc0d81 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.955324982Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.955380562Z" level=info msg="Removed pod sandbox: b4c8cf63ab0debe3bd828ca93178985b67304b82e3b79972a454693d284a2536" id=1dc2c952-017e-46c3-b6cd-871a55cc0d81 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.955820918Z" level=info msg="Stopping pod sandbox: 7f352b08f80d19e0a306b564ddbd5ae6a2b6566be2b87566a0a07cbbd5ca257d" id=6ea64ce4-797b-4f77-9fac-10b1c9dd27eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.955871607Z" level=info msg="Stopped pod sandbox (already stopped): 7f352b08f80d19e0a306b564ddbd5ae6a2b6566be2b87566a0a07cbbd5ca257d" id=6ea64ce4-797b-4f77-9fac-10b1c9dd27eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.956175418Z" level=info msg="Removing pod sandbox: 7f352b08f80d19e0a306b564ddbd5ae6a2b6566be2b87566a0a07cbbd5ca257d" id=7fbd3d64-9cec-4d41-8c33-40d6e5f94a01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.958564564Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.958613021Z" level=info msg="Removed pod sandbox: 7f352b08f80d19e0a306b564ddbd5ae6a2b6566be2b87566a0a07cbbd5ca257d" id=7fbd3d64-9cec-4d41-8c33-40d6e5f94a01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.958973503Z" level=info msg="Stopping pod sandbox: 1b776c0bb82fcd6442fbf65f62eb3c3411179b308aeb8c9b041c7e2cc87b0867" id=811fc8a3-c65b-4fcb-b5bd-b7728a82d279 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.959019884Z" level=info msg="Stopped pod sandbox (already stopped): 1b776c0bb82fcd6442fbf65f62eb3c3411179b308aeb8c9b041c7e2cc87b0867" id=811fc8a3-c65b-4fcb-b5bd-b7728a82d279 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.959387144Z" level=info msg="Removing pod sandbox: 1b776c0bb82fcd6442fbf65f62eb3c3411179b308aeb8c9b041c7e2cc87b0867" id=59c93321-622f-4a23-969a-20e0da8ad47b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.961648663Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:14:39 functional-764481 crio[3607]: time="2025-12-01T19:14:39.961714394Z" level=info msg="Removed pod sandbox: 1b776c0bb82fcd6442fbf65f62eb3c3411179b308aeb8c9b041c7e2cc87b0867" id=59c93321-622f-4a23-969a-20e0da8ad47b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:15:00 functional-764481 crio[3607]: time="2025-12-01T19:15:00.963230889Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=951083c2-dd03-4a8e-b712-3c4df2f18181 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:15:02 functional-764481 crio[3607]: time="2025-12-01T19:15:02.962762094Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bab0f4ee-adba-4e78-a7f2-3f3393de8bbd name=/runtime.v1.ImageService/PullImage
	Dec 01 19:15:49 functional-764481 crio[3607]: time="2025-12-01T19:15:49.963445294Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d1286686-47dc-4c5a-abb0-16fb2b2b7408 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:15:53 functional-764481 crio[3607]: time="2025-12-01T19:15:53.962955198Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8a7220de-ae1f-4233-9837-dd01c59416b5 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:17:15 functional-764481 crio[3607]: time="2025-12-01T19:17:15.963769988Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c369fd55-a5e4-44e1-b77e-62237e55356e name=/runtime.v1.ImageService/PullImage
	Dec 01 19:17:18 functional-764481 crio[3607]: time="2025-12-01T19:17:18.963611468Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9b71e239-8598-4626-9842-aae19460d5e0 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:20:01 functional-764481 crio[3607]: time="2025-12-01T19:20:01.962994137Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dbc11f92-ab6a-477c-954b-136f4bb11c0e name=/runtime.v1.ImageService/PullImage
	Dec 01 19:20:05 functional-764481 crio[3607]: time="2025-12-01T19:20:05.963561158Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=954b425e-62b4-4812-a437-e1f7fa38da55 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ff95859e4c689       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   b7690d15ea9c2       kubernetes-dashboard-855c9754f9-ghrzx        kubernetes-dashboard
	393e8f2d98c07       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   7c1a55bc6d58e       dashboard-metrics-scraper-77bf4d6c4c-n5qjb   kubernetes-dashboard
	bcdd2840b338e       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   c1b61af37ba2e       sp-pod                                       default
	6e84fd8fe6004       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   90a5d7ef034ca       busybox-mount                                default
	f1a242a244baf       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   ddcf4a4d8aa90       nginx-svc                                    default
	7a9d2c40f8fc5       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   530496dd65e55       mysql-5bb876957f-mmnmh                       default
	bd8b89d11b0b0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 10 minutes ago      Running             kube-apiserver              0                   a4421149ae7d5       kube-apiserver-functional-764481             kube-system
	77a79a0266f02       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Running             kube-controller-manager     2                   3a6cb36ac6d92       kube-controller-manager-functional-764481    kube-system
	aea25a614bd01       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 10 minutes ago      Running             etcd                        1                   81fee51148358       etcd-functional-764481                       kube-system
	f013c312ab53a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 10 minutes ago      Exited              kube-controller-manager     1                   3a6cb36ac6d92       kube-controller-manager-functional-764481    kube-system
	eb929b020cafe       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 10 minutes ago      Running             kube-scheduler              1                   7b02736632e5f       kube-scheduler-functional-764481             kube-system
	537c86b3dafa2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 11 minutes ago      Running             kube-proxy                  1                   c5b8a82ac2908       kube-proxy-mwxzt                             kube-system
	d64bf5e8d24c3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   fe0364decd52c       kindnet-nlcrs                                kube-system
	575b29827acd8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   782f2364272a7       coredns-66bc5c9577-9m55c                     kube-system
	9465539f7dc17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   eca79c6752b16       storage-provisioner                          kube-system
	ef0f93766038b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   782f2364272a7       coredns-66bc5c9577-9m55c                     kube-system
	5eb4beb78cd1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   eca79c6752b16       storage-provisioner                          kube-system
	104b14925a000       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   fe0364decd52c       kindnet-nlcrs                                kube-system
	6b1a5ffeb63a4       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 12 minutes ago      Exited              kube-proxy                  0                   c5b8a82ac2908       kube-proxy-mwxzt                             kube-system
	dac6419a212ac       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 12 minutes ago      Exited              etcd                        0                   81fee51148358       etcd-functional-764481                       kube-system
	be3ea689a8666       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 12 minutes ago      Exited              kube-scheduler              0                   7b02736632e5f       kube-scheduler-functional-764481             kube-system
	
	
	==> coredns [575b29827acd8e6753de2f7f58415140b8891f1831c2bbef113da96982175b5c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58137 - 44562 "HINFO IN 9199890593038521278.1546295060007948087. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025187445s
	
	
	==> coredns [ef0f93766038b4c0668aa33fe148cc2871e1778c549eafec3a9c5b01cb05f215] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38596 - 61117 "HINFO IN 6319466199157597251.5455004740386506976. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04957496s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-764481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-764481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=functional-764481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T19_12_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 19:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-764481
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 19:24:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 19:24:15 +0000   Mon, 01 Dec 2025 19:12:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 19:24:15 +0000   Mon, 01 Dec 2025 19:12:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 19:24:15 +0000   Mon, 01 Dec 2025 19:12:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 19:24:15 +0000   Mon, 01 Dec 2025 19:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-764481
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                1090dc49-453b-4394-9be2-fc4cd05eaf36
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-snhth                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-6grpm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-mmnmh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-9m55c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-764481                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-nlcrs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-764481              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-764481     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mwxzt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-764481              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-n5qjb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ghrzx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-764481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-764481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-764481 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-764481 event: Registered Node functional-764481 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-764481 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-764481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-764481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-764481 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-764481 event: Registered Node functional-764481 in Controller
	
	
	==> dmesg <==
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.060605] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023816] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +4.031647] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +8.063094] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[Dec 1 19:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[ +32.252518] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	
	
	==> etcd [aea25a614bd013fd9e8e380d8af7ed45b9a8778a38ae93d0c074eac3c148d9cc] <==
	{"level":"warn","ts":"2025-12-01T19:13:41.768807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.774930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.782037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.789159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.795440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.801640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.808271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.814541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.820901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.827953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.834978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.841413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.847514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.863530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.869760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.875984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:13:41.923001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58414","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T19:14:11.401640Z","caller":"traceutil/trace.go:172","msg":"trace[1423051917] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"120.610112ms","start":"2025-12-01T19:14:11.281015Z","end":"2025-12-01T19:14:11.401625Z","steps":["trace[1423051917] 'process raft request'  (duration: 120.475685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:14:12.899662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.258196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:14:12.899764Z","caller":"traceutil/trace.go:172","msg":"trace[1473206369] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:672; }","duration":"152.376013ms","start":"2025-12-01T19:14:12.747375Z","end":"2025-12-01T19:14:12.899751Z","steps":["trace[1473206369] 'range keys from in-memory index tree'  (duration: 152.150439ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:14:12.900092Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.752277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/myclaim\" limit:1 ","response":"range_response_count:1 size:1624"}
	{"level":"info","ts":"2025-12-01T19:14:12.900161Z","caller":"traceutil/trace.go:172","msg":"trace[1345911978] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/myclaim; range_end:; response_count:1; response_revision:672; }","duration":"179.890293ms","start":"2025-12-01T19:14:12.720260Z","end":"2025-12-01T19:14:12.900150Z","steps":["trace[1345911978] 'range keys from in-memory index tree'  (duration: 179.356368ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:23:41.460199Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1155}
	{"level":"info","ts":"2025-12-01T19:23:41.479082Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1155,"took":"18.514537ms","hash":2309571889,"current-db-size-bytes":3416064,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-01T19:23:41.479125Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2309571889,"revision":1155,"compact-revision":-1}
	
	
	==> etcd [dac6419a212ac58dc81ccfeb99325a25562353a267578d667bba000a0054c939] <==
	{"level":"warn","ts":"2025-12-01T19:12:06.201950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.210608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.217668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.224129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.249023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.256392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:12:06.309011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T19:13:20.583867Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-01T19:13:20.583943Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-764481","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-01T19:13:20.584061Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T19:13:27.585789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T19:13:27.587074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:13:27.587131Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-01T19:13:27.587168Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-01T19:13:27.587185Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-01T19:13:27.587159Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T19:13:27.587737Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T19:13:27.587769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-01T19:13:27.587273Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T19:13:27.587792Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T19:13:27.587801Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:13:27.589785Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-01T19:13:27.589842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:13:27.589864Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-01T19:13:27.589886Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-764481","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:24:22 up  1:06,  0 user,  load average: 0.38, 0.35, 0.40
	Linux functional-764481 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [104b14925a0007f841d18ccca106c332c6d109aaa176092b95e865fc0bd4e93c] <==
	I1201 19:12:16.001709       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 19:12:16.001968       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1201 19:12:16.002121       1 main.go:148] setting mtu 1500 for CNI 
	I1201 19:12:16.002140       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 19:12:16.002166       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T19:12:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 19:12:16.204813       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 19:12:16.205541       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 19:12:16.205559       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 19:12:16.205711       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1201 19:12:46.206271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1201 19:12:46.206336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1201 19:12:46.206363       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1201 19:12:46.206271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1201 19:12:47.706586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 19:12:47.706614       1 metrics.go:72] Registering metrics
	I1201 19:12:47.706690       1 controller.go:711] "Syncing nftables rules"
	I1201 19:12:56.209627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:12:56.209693       1 main.go:301] handling current node
	I1201 19:13:06.212236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:13:06.212273       1 main.go:301] handling current node
	I1201 19:13:16.204901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:13:16.204960       1 main.go:301] handling current node
	
	
	==> kindnet [d64bf5e8d24c3e16673e4e49e01c686f6c3acfc9d23c17edef4bb4e2f019a08f] <==
	I1201 19:22:21.791695       1 main.go:301] handling current node
	I1201 19:22:31.790594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:22:31.790625       1 main.go:301] handling current node
	I1201 19:22:41.790372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:22:41.790409       1 main.go:301] handling current node
	I1201 19:22:51.794154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:22:51.794184       1 main.go:301] handling current node
	I1201 19:23:01.790608       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:01.790643       1 main.go:301] handling current node
	I1201 19:23:11.795908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:11.795940       1 main.go:301] handling current node
	I1201 19:23:21.793072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:21.793135       1 main.go:301] handling current node
	I1201 19:23:31.795376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:31.795423       1 main.go:301] handling current node
	I1201 19:23:41.799409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:41.799440       1 main.go:301] handling current node
	I1201 19:23:51.793482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:23:51.793510       1 main.go:301] handling current node
	I1201 19:24:01.791077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:24:01.791106       1 main.go:301] handling current node
	I1201 19:24:11.791158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:24:11.791196       1 main.go:301] handling current node
	I1201 19:24:21.789928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:24:21.789956       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd8b89d11b0b04f0f9b6c2ec63025f975800d75794d9a3995d54d712849a2211] <==
	I1201 19:13:42.992966       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 19:13:42.992966       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 19:13:43.262218       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1201 19:13:43.466835       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1201 19:13:43.467954       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 19:13:43.472988       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 19:13:43.805198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 19:13:43.892759       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 19:13:43.938497       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 19:13:43.943437       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 19:13:46.080521       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 19:14:00.165264       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.133.253"}
	I1201 19:14:04.690895       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.55.39"}
	I1201 19:14:06.196973       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.153.40"}
	E1201 19:14:18.835956       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38930: use of closed network connection
	E1201 19:14:20.296146       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38966: use of closed network connection
	I1201 19:14:20.521410       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.84.154"}
	I1201 19:14:22.541680       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.34.173"}
	E1201 19:14:22.578664       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56494: use of closed network connection
	E1201 19:14:24.736263       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56564: use of closed network connection
	I1201 19:14:31.015418       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 19:14:31.117705       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.77.39"}
	I1201 19:14:31.131259       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.154.20"}
	E1201 19:14:32.367708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56636: use of closed network connection
	I1201 19:23:42.299526       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [77a79a0266f020b09230c30eb57cef34a1377fce1f3b8ebe4d7e4839e3aeaccd] <==
	I1201 19:13:45.726729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 19:13:45.726741       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1201 19:13:45.726762       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 19:13:45.726775       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 19:13:45.726726       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 19:13:45.726788       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 19:13:45.727328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 19:13:45.727949       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 19:13:45.731233       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 19:13:45.731336       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 19:13:45.731380       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 19:13:45.731392       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 19:13:45.731400       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 19:13:45.738447       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 19:13:45.740602       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 19:13:45.746909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 19:13:45.746923       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 19:13:45.746930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 19:13:45.749097       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1201 19:14:31.067023       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:14:31.070387       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:14:31.073925       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:14:31.075831       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:14:31.077951       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:14:31.081902       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [f013c312ab53a8657a52e86437ae3bdec6fc71ffa5d7604e57b8e975fe8a4ed0] <==
	I1201 19:13:30.325095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	I1201 19:13:30.326596       1 node_lifecycle_controller.go:419] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1201 19:13:30.326647       1 controllermanager.go:781] "Started controller" controller="node-lifecycle-controller"
	I1201 19:13:30.326660       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1201 19:13:30.326725       1 node_lifecycle_controller.go:453] "Sending events to api server" logger="node-lifecycle-controller"
	I1201 19:13:30.326779       1 node_lifecycle_controller.go:464] "Starting node controller" logger="node-lifecycle-controller"
	I1201 19:13:30.326791       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1201 19:13:30.328422       1 controllermanager.go:781] "Started controller" controller="endpointslice-controller"
	I1201 19:13:30.328523       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1201 19:13:30.328538       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice"
	I1201 19:13:30.363736       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1201 19:13:30.367040       1 controllermanager.go:781] "Started controller" controller="statefulset-controller"
	I1201 19:13:30.367203       1 stateful_set.go:169] "Starting stateful set controller" logger="statefulset-controller"
	I1201 19:13:30.367219       1 shared_informer.go:349] "Waiting for caches to sync" controller="stateful set"
	I1201 19:13:30.406350       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	I1201 19:13:30.416480       1 controllermanager.go:781] "Started controller" controller="bootstrap-signer-controller"
	I1201 19:13:30.416545       1 shared_informer.go:349] "Waiting for caches to sync" controller="bootstrap_signer"
	I1201 19:13:30.466539       1 controllermanager.go:781] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1201 19:13:30.466561       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1201 19:13:30.466614       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1201 19:13:30.466623       1 shared_informer.go:349] "Waiting for caches to sync" controller="PVC protection"
	I1201 19:13:30.621821       1 controllermanager.go:781] "Started controller" controller="namespace-controller"
	I1201 19:13:30.621890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1201 19:13:30.621900       1 shared_informer.go:349] "Waiting for caches to sync" controller="namespace"
	F1201 19:13:30.664056       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [537c86b3dafa23cc02e0064cfc2ffe4ad685c0ecd0ad9578f313463acf817169] <==
	I1201 19:13:21.477151       1 server_linux.go:53] "Using iptables proxy"
	I1201 19:13:21.539015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 19:13:21.639130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 19:13:21.639184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:13:21.639260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:13:21.657826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:13:21.657894       1 server_linux.go:132] "Using iptables Proxier"
	I1201 19:13:21.663017       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:13:21.663277       1 server.go:527] "Version info" version="v1.34.2"
	I1201 19:13:21.663334       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:13:21.664364       1 config.go:200] "Starting service config controller"
	I1201 19:13:21.664387       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:13:21.664409       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:13:21.664426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:13:21.664470       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:13:21.664482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:13:21.664505       1 config.go:309] "Starting node config controller"
	I1201 19:13:21.664524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:13:21.664530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:13:21.765361       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 19:13:21.765419       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 19:13:21.765454       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [6b1a5ffeb63a466477c5b5c1a84964177374edb95d9b7791d0ce36f7adfdf39d] <==
	I1201 19:12:15.861993       1 server_linux.go:53] "Using iptables proxy"
	I1201 19:12:15.936952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 19:12:16.037573       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 19:12:16.037604       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:12:16.037672       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:12:16.055161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:12:16.055209       1 server_linux.go:132] "Using iptables Proxier"
	I1201 19:12:16.060037       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:12:16.060361       1 server.go:527] "Version info" version="v1.34.2"
	I1201 19:12:16.060390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:12:16.061795       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:12:16.061825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:12:16.061825       1 config.go:200] "Starting service config controller"
	I1201 19:12:16.061854       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:12:16.061904       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:12:16.061937       1 config.go:309] "Starting node config controller"
	I1201 19:12:16.061945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:12:16.061938       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:12:16.061952       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:12:16.162035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 19:12:16.162076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 19:12:16.162087       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [be3ea689a866608a3b3935a58142a8e242aa704a69c636c17730dfa229c7470b] <==
	E1201 19:12:06.713841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:12:06.713894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:12:06.713894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:12:07.538809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 19:12:07.574144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:12:07.577358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 19:12:07.608670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 19:12:07.695185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 19:12:07.716369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:12:07.831653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:12:07.833450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 19:12:07.855631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:12:07.893783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:12:07.905081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1201 19:12:07.907980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 19:12:07.917353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 19:12:07.951557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:12:07.960684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1201 19:12:09.910749       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 19:13:27.692833       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1201 19:13:27.692816       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 19:13:27.692902       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1201 19:13:27.692937       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1201 19:13:27.693017       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1201 19:13:27.693047       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eb929b020cafec65d7bdeb9dfdaee181f3441421cdf778344e92073376292236] <==
	I1201 19:13:30.130703       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 19:13:30.131635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 19:13:30.131795       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 19:13:30.231860       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1201 19:13:30.231892       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 19:13:30.231863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1201 19:13:42.284542       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:13:42.284683       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:13:42.284751       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 19:13:42.284839       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 19:13:42.285019       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 19:13:42.285054       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:13:42.285072       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:13:42.285089       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:13:42.285106       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 19:13:42.285363       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:13:42.285484       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 19:13:42.285623       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 19:13:42.285670       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:13:42.285688       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 19:13:42.285704       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 19:13:42.285728       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 19:13:42.298597       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 19:13:42.299234       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1201 19:13:42.299339       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kubelet <==
	Dec 01 19:21:35 functional-764481 kubelet[4337]: E1201 19:21:35.963018    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:21:47 functional-764481 kubelet[4337]: E1201 19:21:47.963278    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:21:48 functional-764481 kubelet[4337]: E1201 19:21:48.962385    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:01 functional-764481 kubelet[4337]: E1201 19:22:01.962721    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:02 functional-764481 kubelet[4337]: E1201 19:22:02.962521    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:22:13 functional-764481 kubelet[4337]: E1201 19:22:13.963158    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:16 functional-764481 kubelet[4337]: E1201 19:22:16.962626    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:22:25 functional-764481 kubelet[4337]: E1201 19:22:25.962828    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:30 functional-764481 kubelet[4337]: E1201 19:22:30.963172    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:22:39 functional-764481 kubelet[4337]: E1201 19:22:39.963552    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:43 functional-764481 kubelet[4337]: E1201 19:22:43.963000    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:22:52 functional-764481 kubelet[4337]: E1201 19:22:52.962744    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:22:58 functional-764481 kubelet[4337]: E1201 19:22:58.962483    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:23:05 functional-764481 kubelet[4337]: E1201 19:23:05.962892    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:23:11 functional-764481 kubelet[4337]: E1201 19:23:11.963385    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:23:17 functional-764481 kubelet[4337]: E1201 19:23:17.963379    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:23:25 functional-764481 kubelet[4337]: E1201 19:23:25.962736    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:23:32 functional-764481 kubelet[4337]: E1201 19:23:32.962232    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:23:37 functional-764481 kubelet[4337]: E1201 19:23:37.963130    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:23:44 functional-764481 kubelet[4337]: E1201 19:23:44.962404    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:23:52 functional-764481 kubelet[4337]: E1201 19:23:52.963138    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:23:59 functional-764481 kubelet[4337]: E1201 19:23:59.963746    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:24:05 functional-764481 kubelet[4337]: E1201 19:24:05.964097    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	Dec 01 19:24:13 functional-764481 kubelet[4337]: E1201 19:24:13.962727    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-snhth" podUID="15032c92-8846-45ce-a584-c6929c96fcaa"
	Dec 01 19:24:18 functional-764481 kubelet[4337]: E1201 19:24:18.962491    4337 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6grpm" podUID="cd528b94-b6be-45b4-83db-f76e04d3ff0e"
	
	
	==> kubernetes-dashboard [ff95859e4c689030cbe6517fa4f5d8dd653b8b0e514a6167385a313020b7a867] <==
	2025/12/01 19:14:36 Starting overwatch
	2025/12/01 19:14:36 Using namespace: kubernetes-dashboard
	2025/12/01 19:14:36 Using in-cluster config to connect to apiserver
	2025/12/01 19:14:36 Using secret token for csrf signing
	2025/12/01 19:14:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 19:14:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 19:14:36 Successful initial request to the apiserver, version: v1.34.2
	2025/12/01 19:14:36 Generating JWE encryption key
	2025/12/01 19:14:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 19:14:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 19:14:36 Initializing JWE encryption key from synchronized object
	2025/12/01 19:14:36 Creating in-cluster Sidecar client
	2025/12/01 19:14:36 Successful request to sidecar
	2025/12/01 19:14:36 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [5eb4beb78cd1ef79d4d372561b87fe9361032b9a862b73c064fd221a7bdba0e2] <==
	W1201 19:12:57.123110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:12:57.127992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 19:12:57.222490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-764481_2212c6f6-c964-4337-9534-5395e2f33f0f!
	W1201 19:12:59.131876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:12:59.137449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:01.140420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:01.146068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:03.148338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:03.152786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:05.156097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:05.160838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:07.164471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:07.169515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:09.172170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:09.176852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:11.179621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:11.183445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:13.186686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:13.197238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:15.200947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:15.205056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:17.208562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:17.212263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:19.215484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:13:19.219666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9465539f7dc176d18fd507ec99f8a3d19813fe465723d5599443476633d435c4] <==
	W1201 19:23:57.553258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:23:59.556062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:23:59.559757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:01.562632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:01.567747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:03.571262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:03.575184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:05.578465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:05.582478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:07.585463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:07.589358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:09.592297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:09.596901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:11.599319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:11.604043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:13.606880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:13.610275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:15.613256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:15.617720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:17.620727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:17.624331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:19.627586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:19.632071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:21.635504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:24:21.641188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
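The recurring kubelet error above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") is the proximate cause of the ImagePullBackOff on the hello-node and hello-node-connect pods: the Deployments reference the unqualified image kicbase/echo-server, and the node's CRI-O short-name policy rejects it as ambiguous rather than guessing a registry. A minimal sketch of two possible workarounds follows; the docker.io prefix and the 1.0 tag are assumptions for illustration, not taken from this log:

	# 1) Fully qualify the image so no short-name resolution is involved
	#    (the container name echo-server is taken from the pod spec below):
	kubectl --context functional-764481 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:1.0

	# 2) Or relax the short-name policy on the node, in /etc/containers/registries.conf:
	#    short-name-mode = "permissive"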
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-764481 -n functional-764481
helpers_test.go:269: (dbg) Run:  kubectl --context functional-764481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-snhth hello-node-connect-7d85dfc575-6grpm
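Of these, busybox-mount is not a failure: its phase is Succeeded because the pod ran to completion (see the describe output below); only the two hello-node pods are stuck in ImagePullBackOff. As a sketch, assuming direct access to the same kubectl context, a slightly tighter field selector excludes completed pods:

	kubectl --context functional-764481 get pods -A \
	  -o jsonpath='{.items[*].metadata.name}' \
	  --field-selector='status.phase!=Running,status.phase!=Succeeded'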
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-764481 describe pod busybox-mount hello-node-75c85bcc94-snhth hello-node-connect-7d85dfc575-6grpm
helpers_test.go:290: (dbg) kubectl --context functional-764481 describe pod busybox-mount hello-node-75c85bcc94-snhth hello-node-connect-7d85dfc575-6grpm:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-764481/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:14:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6e84fd8fe600408b9930ceba32c9e559f6e3dba222116c9bced6c0e73fe06e7d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 01 Dec 2025 19:14:14 +0000
	      Finished:     Mon, 01 Dec 2025 19:14:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h4kcc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-h4kcc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-764481
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.294s (4.724s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-snhth
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-764481/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:14:22 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxqnb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qxqnb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-snhth to functional-764481
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-6grpm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-764481/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:14:20 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82nxq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-82nxq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6grpm to functional-764481
	  Normal   Pulling    7m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m3s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.89s)
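The post-mortem above shows the root cause: both hello-node pods sit in ImagePullBackOff because CRI-O rejects the unqualified image name ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). When short-name-mode is set to "enforcing" in the containers registries configuration, a bare name that could resolve against more than one unqualified-search registry is refused. A minimal workaround sketch, assuming the intended image is the one published on Docker Hub (the fully qualified reference below is an assumption, not taken from the test):

# deploy with a fully qualified image reference so no short-name resolution is needed
kubectl --context functional-764481 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
kubectl --context functional-764481 expose deployment hello-node-connect --type=NodePort --port=8080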

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-764481 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-764481 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-snhth" [15032c92-8846-45ce-a584-c6929c96fcaa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-764481 -n functional-764481
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-01 19:24:22.888479124 +0000 UTC m=+1115.676543730
functional_test.go:1460: (dbg) Run:  kubectl --context functional-764481 describe po hello-node-75c85bcc94-snhth -n default
functional_test.go:1460: (dbg) kubectl --context functional-764481 describe po hello-node-75c85bcc94-snhth -n default:
Name:             hello-node-75c85bcc94-snhth
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-764481/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:14:22 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxqnb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qxqnb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-snhth to functional-764481
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-764481 logs hello-node-75c85bcc94-snhth -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-764481 logs hello-node-75c85bcc94-snhth -n default: exit status 1 (62.818233ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-snhth" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-764481 logs hello-node-75c85bcc94-snhth -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
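This DeployApp failure has the same cause as the ServiceCmdConnect failure above: the echo-server container never starts because the unqualified kicbase/echo-server pull is rejected. A quick triage sketch, assuming node access via minikube ssh (these commands are illustrative and were not part of the test run):

# confirm the pull error from the cluster events for the stuck pod
kubectl --context functional-764481 get events -n default --field-selector involvedObject.name=hello-node-75c85bcc94-snhth
# attempt the same pull directly against CRI-O on the node
out/minikube-linux-amd64 -p functional-764481 ssh -- sudo crictl pull kicbase/echo-server:latest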

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls
I1201 19:14:25.276026   16873 detect.go:223] nested VM detected
functional_test.go:461: expected "kicbase/echo-server:functional-764481" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)
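Here the "image load --daemon" command itself returns, but the follow-up "image ls" does not show kicbase/echo-server:functional-764481 in the cluster runtime. One way to inspect what CRI-O actually holds, sketched under the assumption that crictl is available inside the minikube node (illustrative, not part of the run):

# list images known to the CRI-O runtime inside the node
out/minikube-linux-amd64 -p functional-764481 ssh -- sudo crictl images | grep echo-server
# compare with what minikube itself reports
out/minikube-linux-amd64 -p functional-764481 image ls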

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-764481" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-764481
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image load --daemon kicbase/echo-server:functional-764481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-764481" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image save kicbase/echo-server:functional-764481 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1201 19:14:28.228956   55520 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:14:28.229120   55520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:28.229130   55520 out.go:374] Setting ErrFile to fd 2...
	I1201 19:14:28.229134   55520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:28.229324   55520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:14:28.229835   55520 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:14:28.229925   55520 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:14:28.230322   55520 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
	I1201 19:14:28.249185   55520 ssh_runner.go:195] Run: systemctl --version
	I1201 19:14:28.249228   55520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
	I1201 19:14:28.266900   55520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
	I1201 19:14:28.363760   55520 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1201 19:14:28.363823   55520 cache_images.go:255] Failed to load cached images for "functional-764481": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1201 19:14:28.363847   55520 cache_images.go:267] failed pushing to: functional-764481

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
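The stderr makes the chain of failures explicit: ImageSaveToFile never produced echo-server-save.tar, so this load fails on "no such file or directory" rather than on the load path itself. The round trip the two tests exercise is roughly the following, sketched with a hypothetical /tmp path in place of the Jenkins workspace path:

# save the tagged image out of the cluster runtime to a tarball ...
out/minikube-linux-amd64 -p functional-764481 image save kicbase/echo-server:functional-764481 /tmp/echo-server-save.tar
# ... and load it back in; the second step can only succeed if the first one wrote the file
out/minikube-linux-amd64 -p functional-764481 image load /tmp/echo-server-save.tar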

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-764481
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image save --daemon kicbase/echo-server:functional-764481 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-764481
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-764481: exit status 1 (17.347762ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-764481

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-764481

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 service --namespace=default --https --url hello-node: exit status 115 (532.879621ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30913
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-764481 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
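Exit status 115 (SVC_UNREACHABLE) here is a knock-on effect of the DeployApp failure: the hello-node service exists and has a NodePort, but no pod ever became ready, so the service has nothing to route to. One way to confirm the empty endpoints, sketched as an illustration (the label selector follows the standard EndpointSlice convention):

# an empty ENDPOINTS column matches the "no running pod for service hello-node" error above
kubectl --context functional-764481 get endpointslices -n default -l kubernetes.io/service-name=hello-node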

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 service hello-node --url --format={{.IP}}: exit status 115 (535.490005ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-764481 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 service hello-node --url: exit status 115 (541.490564ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30913
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-764481 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30913
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-415638 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-415638 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-f65v5" [c4093869-0490-4a13-9d6d-5115470af39c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/12/01 19:26:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-415638 -n functional-415638
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-01 19:36:55.988343035 +0000 UTC m=+1868.776407631
functional_test.go:1645: (dbg) Run:  kubectl --context functional-415638 describe po hello-node-connect-9f67c86d4-f65v5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-415638 describe po hello-node-connect-9f67c86d4-f65v5 -n default:
Name:             hello-node-connect-9f67c86d4-f65v5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-415638/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:26:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjl84 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-mjl84:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-f65v5 to functional-415638
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-415638 logs hello-node-connect-9f67c86d4-f65v5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-415638 logs hello-node-connect-9f67c86d4-f65v5 -n default: exit status 1 (58.459429ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-f65v5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-415638 logs hello-node-connect-9f67c86d4-f65v5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-415638 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-f65v5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-415638/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:26:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjl84 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-mjl84:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-f65v5 to functional-415638
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-415638 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-415638 logs -l app=hello-node-connect: exit status 1 (59.740969ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-f65v5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-415638 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-415638 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.120.154
IPs:                      10.111.120.154
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31454/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-415638
helpers_test.go:243: (dbg) docker inspect functional-415638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663",
	        "Created": "2025-12-01T19:24:34.501117221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 63599,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T19:24:34.532855366Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663/hosts",
	        "LogPath": "/var/lib/docker/containers/7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663/7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663-json.log",
	        "Name": "/functional-415638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-415638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-415638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e44b20aa382d63e0ea8a23e7445d2181dab3535d43600af3b957b2515b90663",
	                "LowerDir": "/var/lib/docker/overlay2/b519b7691571ff03c33fe9b8e775954713dbb3c9c436daf51bcd6a421a9b1384-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b519b7691571ff03c33fe9b8e775954713dbb3c9c436daf51bcd6a421a9b1384/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b519b7691571ff03c33fe9b8e775954713dbb3c9c436daf51bcd6a421a9b1384/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b519b7691571ff03c33fe9b8e775954713dbb3c9c436daf51bcd6a421a9b1384/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-415638",
	                "Source": "/var/lib/docker/volumes/functional-415638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-415638",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-415638",
	                "name.minikube.sigs.k8s.io": "functional-415638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d83c84452863c1a42ce2c1bfdb53b771ca7b63513cd698edf571400287503347",
	            "SandboxKey": "/var/run/docker/netns/d83c84452863",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-415638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b082fa2c63c4b0fa5744ef685cce2f0072ea127e899f4de2cd011af841d05282",
	                    "EndpointID": "653b9ed80994960dd18e1602d01338591a48a8e1c5fa1a3e75641c18b88a629f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9e:7f:0a:46:59:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-415638",
	                        "7e44b20aa382"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-415638 -n functional-415638
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 logs -n 25: (1.246573678s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ dashboard      │ --url --port 36195 -p functional-415638 --alsologtostderr -v=1                                         │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ addons         │ functional-415638 addons list                                                                          │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ addons         │ functional-415638 addons list -o json                                                                  │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /etc/ssl/certs/16873.pem                                                │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /usr/share/ca-certificates/16873.pem                                    │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /etc/ssl/certs/168732.pem                                               │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /usr/share/ca-certificates/168732.pem                                   │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:26 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:26 UTC │ 01 Dec 25 19:27 UTC │
	│ ssh            │ functional-415638 ssh sudo cat /etc/test/nested/copy/16873/hosts                                       │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ image          │ functional-415638 image ls --format short --alsologtostderr                                            │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ image          │ functional-415638 image ls --format yaml --alsologtostderr                                             │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ ssh            │ functional-415638 ssh pgrep buildkitd                                                                  │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │                     │
	│ image          │ functional-415638 image build -t localhost/my-image:functional-415638 testdata/build --alsologtostderr │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ image          │ functional-415638 image ls                                                                             │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ image          │ functional-415638 image ls --format json --alsologtostderr                                             │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ image          │ functional-415638 image ls --format table --alsologtostderr                                            │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ update-context │ functional-415638 update-context --alsologtostderr -v=2                                                │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ update-context │ functional-415638 update-context --alsologtostderr -v=2                                                │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ update-context │ functional-415638 update-context --alsologtostderr -v=2                                                │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:27 UTC │ 01 Dec 25 19:27 UTC │
	│ service        │ functional-415638 service list                                                                         │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:36 UTC │ 01 Dec 25 19:36 UTC │
	│ service        │ functional-415638 service list -o json                                                                 │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:36 UTC │ 01 Dec 25 19:36 UTC │
	│ service        │ functional-415638 service --namespace=default --https --url hello-node                                 │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:36 UTC │                     │
	│ service        │ functional-415638 service hello-node --url --format={{.IP}}                                            │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:36 UTC │                     │
	│ service        │ functional-415638 service hello-node --url                                                             │ functional-415638 │ jenkins │ v1.37.0 │ 01 Dec 25 19:36 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:26:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:26:42.152782   75001 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:26:42.153068   75001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:42.153078   75001 out.go:374] Setting ErrFile to fd 2...
	I1201 19:26:42.153083   75001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:42.153332   75001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:26:42.153775   75001 out.go:368] Setting JSON to false
	I1201 19:26:42.154675   75001 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4153,"bootTime":1764613049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:26:42.154731   75001 start.go:143] virtualization: kvm guest
	I1201 19:26:42.156632   75001 out.go:179] * [functional-415638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:26:42.157849   75001 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:26:42.157871   75001 notify.go:221] Checking for updates...
	I1201 19:26:42.159988   75001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:26:42.161313   75001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:26:42.162503   75001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:26:42.163543   75001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:26:42.164582   75001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:26:42.166242   75001 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:26:42.166778   75001 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:26:42.191601   75001 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:26:42.191760   75001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:26:42.256163   75001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-01 19:26:42.245634936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:26:42.256279   75001 docker.go:319] overlay module found
	I1201 19:26:42.258443   75001 out.go:179] * Using the docker driver based on existing profile
	I1201 19:26:42.259400   75001 start.go:309] selected driver: docker
	I1201 19:26:42.259412   75001 start.go:927] validating driver "docker" against &{Name:functional-415638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-415638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:26:42.259503   75001 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:26:42.259583   75001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:26:42.315177   75001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-01 19:26:42.305565648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:26:42.315807   75001 cni.go:84] Creating CNI manager for ""
	I1201 19:26:42.315867   75001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 19:26:42.315908   75001 start.go:353] cluster config:
	{Name:functional-415638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-415638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:26:42.317772   75001 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 01 19:27:01 functional-415638 crio[4581]: time="2025-12-01T19:27:01.04320153Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.09162388Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=233709d4-1dcf-474d-a1d6-8b74244c9c41 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.092442422Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9f939a58-cb4b-47db-81cd-0aa1836add98 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.095101325Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f3ff5ac3-64bf-4c7a-9754-575e62c4096e name=/runtime.v1.ImageService/ImageStatus
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.099702785Z" level=info msg="Creating container: default/mysql-844cf969f6-5tdph/mysql" id=028bfbba-e109-4d72-8d12-2533a2e241d6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.099839073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.105190644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.105982348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.136450991Z" level=info msg="Created container 7020be0e57cce1fcd1b1d0c364361938bb5cab935631ea1281e0a3ecd39693be: default/mysql-844cf969f6-5tdph/mysql" id=028bfbba-e109-4d72-8d12-2533a2e241d6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.137071084Z" level=info msg="Starting container: 7020be0e57cce1fcd1b1d0c364361938bb5cab935631ea1281e0a3ecd39693be" id=3a0bd3da-0c39-4c12-978e-176afd3f02ff name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 19:27:07 functional-415638 crio[4581]: time="2025-12-01T19:27:07.138696779Z" level=info msg="Started container" PID=8395 containerID=7020be0e57cce1fcd1b1d0c364361938bb5cab935631ea1281e0a3ecd39693be description=default/mysql-844cf969f6-5tdph/mysql id=3a0bd3da-0c39-4c12-978e-176afd3f02ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=a734d2435361e6e465a3a90ce37a700af685bba85fb10638cd7b6bf08383acf2
	Dec 01 19:27:08 functional-415638 crio[4581]: time="2025-12-01T19:27:08.896796933Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8f2cf377-c0b2-4846-b3a7-b1c8bba57276 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:27:25 functional-415638 crio[4581]: time="2025-12-01T19:27:25.897419514Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4a404eee-8f97-4800-817c-c182807b3ea6 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:27:34 functional-415638 crio[4581]: time="2025-12-01T19:27:34.89658669Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=60310450-c973-4b87-a2f4-ddf9bebd90c4 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:27:56 functional-415638 crio[4581]: time="2025-12-01T19:27:56.880063368Z" level=info msg="Stopping pod sandbox: 5c6a33950e2d4b17ff13800940aa2dd18cef3b2be72a31503dbbea7978547dcf" id=df15370f-9bd1-4b8c-8d52-642176bc1592 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:27:56 functional-415638 crio[4581]: time="2025-12-01T19:27:56.880130594Z" level=info msg="Stopped pod sandbox (already stopped): 5c6a33950e2d4b17ff13800940aa2dd18cef3b2be72a31503dbbea7978547dcf" id=df15370f-9bd1-4b8c-8d52-642176bc1592 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 01 19:27:56 functional-415638 crio[4581]: time="2025-12-01T19:27:56.88045666Z" level=info msg="Removing pod sandbox: 5c6a33950e2d4b17ff13800940aa2dd18cef3b2be72a31503dbbea7978547dcf" id=18432a07-82f4-4327-962a-680d8d54fa39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:27:56 functional-415638 crio[4581]: time="2025-12-01T19:27:56.88352037Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 19:27:56 functional-415638 crio[4581]: time="2025-12-01T19:27:56.8835735Z" level=info msg="Removed pod sandbox: 5c6a33950e2d4b17ff13800940aa2dd18cef3b2be72a31503dbbea7978547dcf" id=18432a07-82f4-4327-962a-680d8d54fa39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 01 19:28:17 functional-415638 crio[4581]: time="2025-12-01T19:28:17.896062367Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e4462c0b-01c6-4d98-a5a9-7fed55664339 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:28:20 functional-415638 crio[4581]: time="2025-12-01T19:28:20.896479388Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6cab09fb-a89a-4bf6-8e75-35255d54a2d4 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:29:48 functional-415638 crio[4581]: time="2025-12-01T19:29:48.896397397Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=14f2dc6e-859c-4bdd-ae25-c02e8357d4e5 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:29:50 functional-415638 crio[4581]: time="2025-12-01T19:29:50.897193478Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d4529340-cafc-4a51-99de-948005b03182 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:32:37 functional-415638 crio[4581]: time="2025-12-01T19:32:37.896897288Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=17258d11-6df3-4989-b808-23be638639c2 name=/runtime.v1.ImageService/PullImage
	Dec 01 19:32:39 functional-415638 crio[4581]: time="2025-12-01T19:32:39.896749767Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6402842-2a34-429e-9633-e5d16e90fbd9 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7020be0e57cce       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   a734d2435361e       mysql-844cf969f6-5tdph                       default
	f519112b8da23       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   2b1d4978ffed3       sp-pod                                       default
	775532d80531c       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   356549006b7b9       dashboard-metrics-scraper-5565989548-944gt   kubernetes-dashboard
	e14aed35b37aa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   c72d9c344045d       kubernetes-dashboard-b84665fb8-ndpz2         kubernetes-dashboard
	343b196420961       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   360d097d4b162       nginx-svc                                    default
	0b7a646747ff5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   4aae669de6c63       busybox-mount                                default
	a96a3bac3fc3b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Running             kube-controller-manager     2                   08448d7c673ea       kube-controller-manager-functional-415638    kube-system
	9512e44950d22       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Running             kube-apiserver              2                   fdfe6fcf65a37       kube-apiserver-functional-415638             kube-system
	0d1926f48f317       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 10 minutes ago      Exited              kube-apiserver              1                   fdfe6fcf65a37       kube-apiserver-functional-415638             kube-system
	85f9922a6d78e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 10 minutes ago      Exited              kube-controller-manager     1                   08448d7c673ea       kube-controller-manager-functional-415638    kube-system
	91aa745fa432b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 11 minutes ago      Running             kube-scheduler              1                   716986c06352e       kube-scheduler-functional-415638             kube-system
	fd790bf2302f5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 11 minutes ago      Running             etcd                        1                   60117af950c64       etcd-functional-415638                       kube-system
	1aa9fe82abe9c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Running             coredns                     1                   ab5e8e23f9844       coredns-7d764666f9-s74gd                     kube-system
	497effa1a3de2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   62eb409449b0f       storage-provisioner                          kube-system
	26e40ed210c2c       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Running             kube-proxy                  1                   74e265f5951d5       kube-proxy-x2n8c                             kube-system
	6bda176030fa0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   153c84299f169       kindnet-kvgvp                                kube-system
	4aad1ea23ab81       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 11 minutes ago      Exited              coredns                     0                   ab5e8e23f9844       coredns-7d764666f9-s74gd                     kube-system
	7c83f190a983b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   62eb409449b0f       storage-provisioner                          kube-system
	c59a775ace400       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11               11 minutes ago      Exited              kindnet-cni                 0                   153c84299f169       kindnet-kvgvp                                kube-system
	959e2fbf4acb3       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 11 minutes ago      Exited              kube-proxy                  0                   74e265f5951d5       kube-proxy-x2n8c                             kube-system
	c5f5d3beba4d7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 12 minutes ago      Exited              etcd                        0                   60117af950c64       etcd-functional-415638                       kube-system
	b899f3cfc7045       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 12 minutes ago      Exited              kube-scheduler              0                   716986c06352e       kube-scheduler-functional-415638             kube-system
	
	
	==> coredns [1aa9fe82abe9cb277237a9b6b58dac1ffa532c7088bca7b4a2fd7ad3bd7a4522] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52024 - 52935 "HINFO IN 5756857573694729261.2082401248483724200. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055224342s
	
	
	==> coredns [4aad1ea23ab81b4064d588670fbcff69827e68ea2535fd084c46676a27fc2164] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45569 - 64814 "HINFO IN 4419857117877318191.5455402230742725346. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020871186s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-415638
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-415638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=functional-415638
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T19_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 19:24:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-415638
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 19:36:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 19:36:07 +0000   Mon, 01 Dec 2025 19:24:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 19:36:07 +0000   Mon, 01 Dec 2025 19:24:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 19:36:07 +0000   Mon, 01 Dec 2025 19:24:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 19:36:07 +0000   Mon, 01 Dec 2025 19:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-415638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                84adb004-1ef5-405c-a54b-3aedb1b2ad05
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-pvb47                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-f65v5            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-5tdph                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m57s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 coredns-7d764666f9-s74gd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-415638                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-kvgvp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-415638              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-415638     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-x2n8c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-415638              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-944gt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-ndpz2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-415638 event: Registered Node functional-415638 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-415638 event: Registered Node functional-415638 in Controller
	
	
	==> dmesg <==
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.060605] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023816] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +4.031647] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +8.063094] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[Dec 1 19:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[ +32.252518] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	
	
	==> etcd [c5f5d3beba4d7950b9e7d97accd7abc424beb1df373728a4283361bc0a83d312] <==
	{"level":"warn","ts":"2025-12-01T19:24:50.821668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:24:50.868162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:24:52.419849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.488815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-01T19:24:52.419953Z","caller":"traceutil/trace.go:172","msg":"trace[1757019904] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:105; }","duration":"106.604719ms","start":"2025-12-01T19:24:52.313333Z","end":"2025-12-01T19:24:52.419938Z","steps":["trace[1757019904] 'agreement among raft nodes before linearized reading'  (duration: 45.493473ms)","trace[1757019904] 'range keys from in-memory index tree'  (duration: 60.963291ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:24:52.419958Z","caller":"traceutil/trace.go:172","msg":"trace[270334337] transaction","detail":"{read_only:false; response_revision:106; number_of_response:1; }","duration":"114.458948ms","start":"2025-12-01T19:24:52.305484Z","end":"2025-12-01T19:24:52.419943Z","steps":["trace[270334337] 'process raft request'  (duration: 53.360895ms)","trace[270334337] 'compare'  (duration: 60.977576ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:24:52.650895Z","caller":"traceutil/trace.go:172","msg":"trace[1115465748] transaction","detail":"{read_only:false; response_revision:108; number_of_response:1; }","duration":"161.549236ms","start":"2025-12-01T19:24:52.489331Z","end":"2025-12-01T19:24:52.650880Z","steps":["trace[1115465748] 'process raft request'  (duration: 81.768054ms)","trace[1115465748] 'compare'  (duration: 79.68527ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:24:52.841916Z","caller":"traceutil/trace.go:172","msg":"trace[1910653485] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"124.179806ms","start":"2025-12-01T19:24:52.717721Z","end":"2025-12-01T19:24:52.841901Z","steps":["trace[1910653485] 'process raft request'  (duration: 61.955842ms)","trace[1910653485] 'compare'  (duration: 62.142394ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:25:38.011467Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-01T19:25:38.011548Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-415638","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-01T19:25:38.011687Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T19:25:45.013108Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-01T19:25:45.013198Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:25:45.013224Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-01T19:25:45.013260Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-01T19:25:45.013272Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-01T19:25:45.013278Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T19:25:45.013314Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T19:25:45.013376Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-01T19:25:45.013383Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-01T19:25:45.013387Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-01T19:25:45.013394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:25:45.015667Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-01T19:25:45.015740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-01T19:25:45.015770Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-01T19:25:45.015812Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-415638","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fd790bf2302f5333857b526e596f6c56b757b39e4dd02897f48020013e156e15] <==
	{"level":"warn","ts":"2025-12-01T19:26:09.675891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.681907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.688212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.694033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.700593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.706746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.713008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.719169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.725789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.734432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.741248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.747602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.753918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.769499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.772711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.778798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.785076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.791791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:26:09.838680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T19:27:00.824180Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.571641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-844cf969f6-5tdph\" limit:1 ","response":"range_response_count:1 size:1971"}
	{"level":"info","ts":"2025-12-01T19:27:00.824254Z","caller":"traceutil/trace.go:172","msg":"trace[390433486] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"159.414981ms","start":"2025-12-01T19:27:00.664812Z","end":"2025-12-01T19:27:00.824227Z","steps":["trace[390433486] 'process raft request'  (duration: 77.073016ms)","trace[390433486] 'compare'  (duration: 82.221066ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:27:00.824314Z","caller":"traceutil/trace.go:172","msg":"trace[1375695277] range","detail":"{range_begin:/registry/pods/default/mysql-844cf969f6-5tdph; range_end:; response_count:1; response_revision:854; }","duration":"102.700583ms","start":"2025-12-01T19:27:00.721576Z","end":"2025-12-01T19:27:00.824276Z","steps":["trace[1375695277] 'agreement among raft nodes before linearized reading'  (duration: 20.282181ms)","trace[1375695277] 'range keys from in-memory index tree'  (duration: 82.232664ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T19:36:09.380969Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1173}
	{"level":"info","ts":"2025-12-01T19:36:09.400176Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1173,"took":"18.883321ms","hash":1814302359,"current-db-size-bytes":3514368,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1662976,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-01T19:36:09.400240Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1814302359,"revision":1173,"compact-revision":-1}
	
	
	==> kernel <==
	 19:36:57 up  1:19,  0 user,  load average: 0.25, 0.28, 0.41
	Linux functional-415638 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6bda176030fa0ac2ba82bc828e5893601fce865a2576f80020cb46caa1f57aaa] <==
	I1201 19:34:49.102377       1 main.go:301] handling current node
	I1201 19:34:59.096079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:34:59.096116       1 main.go:301] handling current node
	I1201 19:35:09.094146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:09.094187       1 main.go:301] handling current node
	I1201 19:35:19.096133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:19.096169       1 main.go:301] handling current node
	I1201 19:35:29.095633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:29.095675       1 main.go:301] handling current node
	I1201 19:35:39.094473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:39.094511       1 main.go:301] handling current node
	I1201 19:35:49.103092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:49.103130       1 main.go:301] handling current node
	I1201 19:35:59.094353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:35:59.094389       1 main.go:301] handling current node
	I1201 19:36:09.094717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:36:09.094756       1 main.go:301] handling current node
	I1201 19:36:19.103088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:36:19.103126       1 main.go:301] handling current node
	I1201 19:36:29.094308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:36:29.094339       1 main.go:301] handling current node
	I1201 19:36:39.095622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:36:39.095654       1 main.go:301] handling current node
	I1201 19:36:49.094603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:36:49.094641       1 main.go:301] handling current node
	
	
	==> kindnet [c59a775ace4005f1590ca81ad4de4f1be4ea91dfc04f879e9d4062da6b8d25d2] <==
	I1201 19:25:02.589532       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 19:25:02.589798       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1201 19:25:02.589972       1 main.go:148] setting mtu 1500 for CNI 
	I1201 19:25:02.589990       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 19:25:02.590016       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T19:25:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 19:25:02.787209       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 19:25:02.787280       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 19:25:02.787322       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 19:25:02.864543       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 19:25:03.087495       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 19:25:03.087517       1 metrics.go:72] Registering metrics
	I1201 19:25:03.087561       1 controller.go:711] "Syncing nftables rules"
	I1201 19:25:12.787906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:25:12.787997       1 main.go:301] handling current node
	I1201 19:25:22.788782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:25:22.788831       1 main.go:301] handling current node
	I1201 19:25:32.791317       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1201 19:25:32.791350       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d1926f48f317ea9e27e18e573594f1ac5d12c73337395e4c6da7de3739a38a5] <==
	I1201 19:25:58.077974       1 options.go:263] external host was not specified, using 192.168.49.2
	I1201 19:25:58.082051       1 server.go:150] Version: v1.35.0-beta.0
	I1201 19:25:58.082097       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1201 19:25:58.082467       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [9512e44950d224fe78a4cbcdaa0d9e856f2774eec3b3ed986310981c0efef4fc] <==
	I1201 19:26:10.299212       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 19:26:10.310679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 19:26:10.383785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 19:26:11.191949       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1201 19:26:11.398304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1201 19:26:11.399501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 19:26:11.403500       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 19:26:12.259160       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 19:26:13.571584       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 19:26:30.909393       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.171.137"}
	I1201 19:26:39.168620       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 19:26:39.267205       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.178.193"}
	I1201 19:26:47.195304       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.188.206"}
	I1201 19:26:51.851078       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 19:26:51.896853       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 19:26:51.974481       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.174.244"}
	I1201 19:26:51.987126       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.220.235"}
	I1201 19:26:55.670265       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.120.154"}
	E1201 19:26:58.767966       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41886: use of closed network connection
	I1201 19:27:00.526383       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.53.62"}
	E1201 19:27:07.672543       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36980: use of closed network connection
	E1201 19:27:13.653331       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56662: use of closed network connection
	E1201 19:27:15.090260       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56674: use of closed network connection
	E1201 19:27:15.935007       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56692: use of closed network connection
	I1201 19:36:10.211086       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [85f9922a6d78ecd1ca3ce7a85745da0f2949df331c7cc4a3f9b9273e6d909241] <==
	I1201 19:25:57.505182       1 serving.go:386] Generated self-signed cert in-memory
	I1201 19:25:57.512812       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1201 19:25:57.512836       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:25:57.514112       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1201 19:25:57.514148       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1201 19:25:57.514266       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1201 19:25:57.514343       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 19:25:58.523499       1 controller_descriptor.go:99] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1201 19:25:58.523543       1 controllermanager.go:579] "Warning: skipping controller" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller"
	I1201 19:25:58.526409       1 controller_descriptor.go:99] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1201 19:25:58.526430       1 controllermanager.go:579] "Warning: skipping controller" controller="device-taint-eviction-controller"
	I1201 19:25:58.548261       1 range_allocator.go:113] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1201 19:25:58.576540       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I1201 19:25:58.576557       1 controllermanager.go:579] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1201 19:25:58.610172       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1201 19:25:58.610197       1 controllermanager.go:579] "Warning: skipping controller" controller="service-lb-controller"
	I1201 19:25:58.610208       1 controller_descriptor.go:99] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1201 19:25:58.610221       1 controllermanager.go:579] "Warning: skipping controller" controller="storageversion-garbage-collector-controller"
	I1201 19:25:58.627118       1 controllermanager.go:627] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1201 19:25:58.676885       1 controller_descriptor.go:107] "Skipping a cloud provider controller" controller="node-route-controller"
	I1201 19:25:58.676905       1 controllermanager.go:579] "Warning: skipping controller" controller="node-route-controller"
	I1201 19:25:58.726479       1 controllermanager.go:579] "Warning: skipping controller" controller="storage-version-migrator-controller"
	E1201 19:25:58.924400       1 controllermanager.go:575] "Error initializing a controller" err="failed to create the discovery client: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller\": dial tcp 192.168.49.2:8441: connect: connection refused" controller="resourcequota-controller"
	E1201 19:25:58.924433       1 controllermanager.go:257] "Error building controllers" err="failed to create the discovery client: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [a96a3bac3fc3bab9107aea05be36446170cb15902ca7e8486ae10d61cb9c7fec] <==
	I1201 19:26:13.425552       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1201 19:26:13.425678       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-415638"
	I1201 19:26:13.425726       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1201 19:26:13.427896       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 19:26:13.428300       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.428550       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.428569       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.428681       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.428756       1 range_allocator.go:177] "Sending events to api server"
	I1201 19:26:13.428801       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1201 19:26:13.428810       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 19:26:13.428816       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.429578       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.524100       1 shared_informer.go:377] "Caches are synced"
	I1201 19:26:13.524119       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 19:26:13.524126       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1201 19:26:13.528416       1 shared_informer.go:377] "Caches are synced"
	E1201 19:26:51.897444       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.905030       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.909192       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.909331       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.920164       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.933964       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.933967       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1201 19:26:51.939087       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [26e40ed210c2c5ecba8a2d8776a47915472695302924725914facbf835383b69] <==
	I1201 19:25:38.818283       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 19:25:47.819196       1 shared_informer.go:377] "Caches are synced"
	I1201 19:25:47.819240       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:25:47.819411       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:25:47.839871       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:25:47.839933       1 server_linux.go:136] "Using iptables Proxier"
	I1201 19:25:47.846157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:25:47.846474       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 19:25:47.846490       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:25:47.847970       1 config.go:200] "Starting service config controller"
	I1201 19:25:47.847982       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:25:47.848013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:25:47.848015       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:25:47.847980       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:25:47.848035       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:25:47.848095       1 config.go:309] "Starting node config controller"
	I1201 19:25:47.848107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:25:47.848116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:25:47.948489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 19:25:47.948568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 19:25:47.948580       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1201 19:25:56.957880       1 reflector.go:204] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.EndpointSlice"
	E1201 19:25:56.957874       1 reflector.go:204] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ServiceCIDR"
	E1201 19:25:56.957882       1 reflector.go:204] "Failed to watch" err="nodes \"functional-415638\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1201 19:25:56.957957       1 reflector.go:204] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	
	
	==> kube-proxy [959e2fbf4acb36c10530edcf10025ce4dea999fa9f32e2d04ea031f2acede7b0] <==
	I1201 19:25:00.683057       1 server_linux.go:53] "Using iptables proxy"
	I1201 19:25:00.744904       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 19:25:00.845342       1 shared_informer.go:377] "Caches are synced"
	I1201 19:25:00.845402       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1201 19:25:00.845507       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:25:00.863653       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 19:25:00.863710       1 server_linux.go:136] "Using iptables Proxier"
	I1201 19:25:00.868837       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:25:00.869261       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 19:25:00.869299       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:25:00.870391       1 config.go:200] "Starting service config controller"
	I1201 19:25:00.870419       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:25:00.870452       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:25:00.870458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:25:00.870503       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:25:00.870528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:25:00.870589       1 config.go:309] "Starting node config controller"
	I1201 19:25:00.870643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:25:00.870655       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:25:00.970935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 19:25:00.970935       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 19:25:00.970983       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [91aa745fa432b5311c12073893396d7c5b73cbc386dad976ca2324eefa18cbeb] <==
	I1201 19:25:47.859825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 19:25:48.060146       1 shared_informer.go:377] "Caches are synced"
	I1201 19:25:48.060164       1 shared_informer.go:377] "Caches are synced"
	I1201 19:25:48.060250       1 shared_informer.go:377] "Caches are synced"
	E1201 19:25:56.933125       1 reflector.go:204] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1201 19:25:56.933322       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1201 19:25:56.933433       1 reflector.go:204] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1201 19:25:56.933491       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1201 19:25:56.933532       1 reflector.go:204] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1201 19:25:56.933589       1 reflector.go:204] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1201 19:25:56.933639       1 reflector.go:204] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1201 19:25:56.933681       1 reflector.go:204] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1201 19:25:56.933729       1 reflector.go:204] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1201 19:25:56.933779       1 reflector.go:204] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1201 19:25:56.933861       1 reflector.go:204] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1201 19:25:56.933887       1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1201 19:25:56.933922       1 reflector.go:204] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1201 19:25:56.933968       1 reflector.go:204] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1201 19:25:56.934001       1 reflector.go:204] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1201 19:25:56.934030       1 reflector.go:204] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1201 19:25:56.934057       1 reflector.go:204] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1201 19:25:56.934078       1 reflector.go:204] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1201 19:25:56.945919       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1201 19:25:56.946155       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1201 19:25:56.946235       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [b899f3cfc7045dee12574f0cf40d855152c15ba634bdd79f800a2e679eb9d23b] <==
	E1201 19:24:52.401458       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1201 19:24:52.402303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1201 19:24:52.408386       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 19:24:52.409195       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1201 19:24:52.417333       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1201 19:24:52.418199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1201 19:24:52.455632       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 19:24:52.456683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1201 19:24:52.480473       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 19:24:52.481445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1201 19:24:52.558987       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 19:24:52.560032       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1201 19:24:52.567508       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 19:24:52.568587       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1201 19:24:52.773812       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1201 19:24:52.774812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1201 19:24:52.787072       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1201 19:24:52.788260       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1201 19:24:55.861842       1 shared_informer.go:377] "Caches are synced"
	I1201 19:25:45.120958       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1201 19:25:45.121105       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1201 19:25:45.121134       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 19:25:45.121187       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1201 19:25:45.121194       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1201 19:25:45.121216       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 01 19:35:15 functional-415638 kubelet[5300]: E1201 19:35:15.896220    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:35:16 functional-415638 kubelet[5300]: E1201 19:35:16.897140    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:35:28 functional-415638 kubelet[5300]: E1201 19:35:28.896658    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:35:28 functional-415638 kubelet[5300]: E1201 19:35:28.897329    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:35:36 functional-415638 kubelet[5300]: E1201 19:35:36.896898    5300 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-ndpz2" containerName="kubernetes-dashboard"
	Dec 01 19:35:42 functional-415638 kubelet[5300]: E1201 19:35:42.897010    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:35:43 functional-415638 kubelet[5300]: E1201 19:35:43.895918    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:35:44 functional-415638 kubelet[5300]: E1201 19:35:44.896182    5300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-415638" containerName="kube-apiserver"
	Dec 01 19:35:46 functional-415638 kubelet[5300]: E1201 19:35:46.896079    5300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-s74gd" containerName="coredns"
	Dec 01 19:35:56 functional-415638 kubelet[5300]: E1201 19:35:56.897190    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:35:56 functional-415638 kubelet[5300]: E1201 19:35:56.897536    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:35:57 functional-415638 kubelet[5300]: E1201 19:35:57.895824    5300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-415638" containerName="kube-scheduler"
	Dec 01 19:36:03 functional-415638 kubelet[5300]: E1201 19:36:03.896235    5300 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-944gt" containerName="dashboard-metrics-scraper"
	Dec 01 19:36:07 functional-415638 kubelet[5300]: E1201 19:36:07.895627    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:36:08 functional-415638 kubelet[5300]: E1201 19:36:08.895956    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:36:09 functional-415638 kubelet[5300]: E1201 19:36:09.895280    5300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-415638" containerName="kube-controller-manager"
	Dec 01 19:36:22 functional-415638 kubelet[5300]: E1201 19:36:22.896764    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:36:23 functional-415638 kubelet[5300]: E1201 19:36:23.895860    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:36:33 functional-415638 kubelet[5300]: E1201 19:36:33.896152    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:36:36 functional-415638 kubelet[5300]: E1201 19:36:36.896798    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	Dec 01 19:36:37 functional-415638 kubelet[5300]: E1201 19:36:37.895497    5300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-415638" containerName="etcd"
	Dec 01 19:36:38 functional-415638 kubelet[5300]: E1201 19:36:38.896337    5300 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-ndpz2" containerName="kubernetes-dashboard"
	Dec 01 19:36:48 functional-415638 kubelet[5300]: E1201 19:36:48.895825    5300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-s74gd" containerName="coredns"
	Dec 01 19:36:48 functional-415638 kubelet[5300]: E1201 19:36:48.896210    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-9f67c86d4-f65v5" podUID="c4093869-0490-4a13-9d6d-5115470af39c"
	Dec 01 19:36:48 functional-415638 kubelet[5300]: E1201 19:36:48.896518    5300 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-5758569b79-pvb47" podUID="a042cd3f-1f1a-433f-80cb-b05f1c3a53d0"
	
	
	==> kubernetes-dashboard [e14aed35b37aabbd1f0e4007d282c700942e2d186ea8053ea91e8dee491a8334] <==
	2025/12/01 19:26:55 Using namespace: kubernetes-dashboard
	2025/12/01 19:26:55 Using in-cluster config to connect to apiserver
	2025/12/01 19:26:55 Using secret token for csrf signing
	2025/12/01 19:26:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 19:26:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 19:26:55 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/01 19:26:55 Generating JWE encryption key
	2025/12/01 19:26:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 19:26:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 19:26:55 Initializing JWE encryption key from synchronized object
	2025/12/01 19:26:55 Creating in-cluster Sidecar client
	2025/12/01 19:26:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 19:26:55 Serving insecurely on HTTP port: 9090
	2025/12/01 19:27:25 Successful request to sidecar
	2025/12/01 19:26:55 Starting overwatch
	
	
	==> storage-provisioner [497effa1a3de220544b01f57c71899f97db0a0ec549686154d39b458cb746c80] <==
	W1201 19:36:33.055397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:35.057998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:35.061996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:37.065037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:37.068828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:39.071807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:39.076461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:41.079357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:41.085640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:43.089121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:43.092944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:45.095900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:45.099760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:47.103308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:47.106990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:49.109692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:49.113119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:51.116817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:51.120823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:53.124264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:53.127968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:55.130554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:55.135185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:57.137741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:36:57.142813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7c83f190a983bfe1edc96b5ea7c0d797750eb30e637642f91cf0754c395e2a0b] <==
	W1201 19:25:13.752699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:13.756326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 19:25:13.850869       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-415638_297c6960-93de-463b-9821-505b5c3b8b15!
	W1201 19:25:15.758742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:15.762427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:17.765660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:17.770040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:19.773095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:19.778274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:21.781715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:21.785195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:23.788947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:23.793847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:25.796800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:25.801512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:27.804948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:27.808925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:29.811990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:29.815784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:31.818581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:31.823550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:33.826622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:33.830258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:35.833370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:25:35.837046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-415638 -n functional-415638
helpers_test.go:269: (dbg) Run:  kubectl --context functional-415638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-pvb47 hello-node-connect-9f67c86d4-f65v5
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-415638 describe pod busybox-mount hello-node-5758569b79-pvb47 hello-node-connect-9f67c86d4-f65v5
helpers_test.go:290: (dbg) kubectl --context functional-415638 describe pod busybox-mount hello-node-5758569b79-pvb47 hello-node-connect-9f67c86d4-f65v5:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-415638/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:26:41 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  mount-munger:
	    Container ID:  cri-o://0b7a646747ff53fd47cec9d48582d73ec176b561af3c15d680e77bccee0dda53
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 01 Dec 2025 19:26:43 +0000
	      Finished:     Mon, 01 Dec 2025 19:26:43 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqbvj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqbvj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-415638
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.324s (1.324s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-pvb47
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-415638/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:26:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.3
	IPs:
	  IP:           10.244.0.3
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft6wt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ft6wt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-pvb47 to functional-415638
	  Normal   Pulling    7m10s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-f65v5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-415638/192.168.49.2
	Start Time:       Mon, 01 Dec 2025 19:26:55 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjl84 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjl84:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-f65v5 to functional-415638
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.79s)
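The ErrImagePull loop above is CRI-O's short-name policy at work rather than a registry outage: with short-name mode set to enforcing on the node, an unqualified reference like kicbase/echo-server matches more than one configured search registry and is rejected as ambiguous. Two illustrative commands, not run by the test, that would confirm the policy and sidestep it by fully qualifying the image (the docker.io prefix and the registries.conf path are assumptions about this node's setup):

	out/minikube-linux-amd64 -p functional-415638 ssh -- sudo grep -n short-name /etc/containers/registries.conf
	kubectl --context functional-415638 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest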

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-415638 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-415638 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-pvb47" [a042cd3f-1f1a-433f-80cb-b05f1c3a53d0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-415638 -n functional-415638
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-01 19:36:39.599899413 +0000 UTC m=+1852.387964014
functional_test.go:1460: (dbg) Run:  kubectl --context functional-415638 describe po hello-node-5758569b79-pvb47 -n default
functional_test.go:1460: (dbg) kubectl --context functional-415638 describe po hello-node-5758569b79-pvb47 -n default:
Name:             hello-node-5758569b79-pvb47
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-415638/192.168.49.2
Start Time:       Mon, 01 Dec 2025 19:26:39 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.3
IPs:
  IP:           10.244.0.3
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft6wt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ft6wt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-5758569b79-pvb47 to functional-415638
  Normal   Pulling    6m51s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m51s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m51s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-415638 logs hello-node-5758569b79-pvb47 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-415638 logs hello-node-5758569b79-pvb47 -n default: exit status 1 (64.900627ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-pvb47" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-415638 logs hello-node-5758569b79-pvb47 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image load --daemon kicbase/echo-server:functional-415638 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-415638" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.95s)
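This failure and the ImageReloadDaemon and ImageTagAndLoadDaemon failures below have the same shape: `image load --daemon` exits cleanly, but the tag never appears in the follow-up `image ls`. A small diagnostic sketch (not part of the test) to tell a missing host-side image apart from a load that silently failed inside the crio runtime:

	docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-415638
	out/minikube-linux-amd64 -p functional-415638 image ls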

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image load --daemon kicbase/echo-server:functional-415638 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-415638" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-415638
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image load --daemon kicbase/echo-server:functional-415638 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-415638" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image save kicbase/echo-server:functional-415638 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1201 19:26:45.450201   75880 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:26:45.450671   75880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:45.450684   75880 out.go:374] Setting ErrFile to fd 2...
	I1201 19:26:45.450689   75880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:45.450915   75880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:26:45.451461   75880 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:26:45.451550   75880 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:26:45.451945   75880 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
	I1201 19:26:45.471234   75880 ssh_runner.go:195] Run: systemctl --version
	I1201 19:26:45.471281   75880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
	I1201 19:26:45.488424   75880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
	I1201 19:26:45.586061   75880 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1201 19:26:45.586149   75880 cache_images.go:255] Failed to load cached images for "functional-415638": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1201 19:26:45.586190   75880 cache_images.go:267] failed pushing to: functional-415638

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.20s)
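The stat error in the stderr above is a knock-on effect of the ImageSaveToFile failure: `image save` exited without writing echo-server-save.tar, so this `image load` finds nothing at the path it is told to read. For reference, the intended round trip looks like the sketch below; the /tmp path is illustrative, not the workspace path the test uses:

	out/minikube-linux-amd64 -p functional-415638 image save kicbase/echo-server:functional-415638 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-415638 image load /tmp/echo-server.tar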

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-415638
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image save --daemon kicbase/echo-server:functional-415638 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-415638
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-415638: exit status 1 (19.778093ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-415638

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-415638

                                                
                                                
** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 service --namespace=default --https --url hello-node: exit status 115 (534.109424ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31314
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-415638 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)
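The SVC_UNREACHABLE exit here, and in the Format and URL subtests that follow, lines up with the DeployApp failure above: minikube can still compute the NodePort URL, but the hello-node service has no running pod behind it while the image pull keeps backing off. Two hedged verification commands, outside what the test runs:

	kubectl --context functional-415638 get endpoints hello-node
	kubectl --context functional-415638 get pods -l app=hello-node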

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 service hello-node --url --format={{.IP}}: exit status 115 (532.517777ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-415638 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 service hello-node --url: exit status 115 (532.950484ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31314
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-415638 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31314
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.07s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-779728 --output=json --user=testUser
E1201 19:47:06.983548   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-779728 --output=json --user=testUser: exit status 80 (2.067609659s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5cdb08f1-77d4-44e2-82e3-c9bede7d0911","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-779728 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"29000b1a-dca6-4b65-85c1-f883f8362cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-01T19:47:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"35a3cae0-6f93-4b6d-9e61-f778e47f07de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-779728 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.07s)
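The GUEST_PAUSE error is not a JSON-output problem: `minikube pause` lists containers on the node with `sudo runc list -f json`, and that call fails because /run/runc does not exist, which suggests the crio runtime on this node keeps its state somewhere other than the default runc root. The unpause failure below is the same error one step later. A diagnostic sketch for reproducing the failing call by hand (the ssh invocation is illustrative, not part of the test):

	out/minikube-linux-amd64 -p json-output-779728 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p json-output-779728 ssh -- ls -la /run/runc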

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.87s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-779728 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-779728 --output=json --user=testUser: exit status 80 (1.871020437s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"60fcd53a-3eea-4c21-9a5d-579928a15dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-779728 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c2c529c1-c443-48e3-9fae-c633f627c981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-01T19:47:10Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6f956323-8920-4a5e-a0b5-a7c54eb7b363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-779728 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.87s)

                                                
                                    
x
+
TestPause/serial/Pause (5.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-138480 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-138480 --alsologtostderr -v=5: exit status 80 (2.343314368s)

                                                
                                                
-- stdout --
	* Pausing node pause-138480 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:01:54.195532  259813 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:01:54.195651  259813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:54.195659  259813 out.go:374] Setting ErrFile to fd 2...
	I1201 20:01:54.195667  259813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:54.196061  259813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:01:54.196426  259813 out.go:368] Setting JSON to false
	I1201 20:01:54.196447  259813 mustload.go:66] Loading cluster: pause-138480
	I1201 20:01:54.196842  259813 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:54.197265  259813 cli_runner.go:164] Run: docker container inspect pause-138480 --format={{.State.Status}}
	I1201 20:01:54.216775  259813 host.go:66] Checking if "pause-138480" exists ...
	I1201 20:01:54.217099  259813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:01:54.276527  259813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-01 20:01:54.265733487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:01:54.277316  259813 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-138480 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:01:54.279194  259813 out.go:179] * Pausing node pause-138480 ... 
	I1201 20:01:54.280301  259813 host.go:66] Checking if "pause-138480" exists ...
	I1201 20:01:54.280593  259813 ssh_runner.go:195] Run: systemctl --version
	I1201 20:01:54.280633  259813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:54.298917  259813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:54.396970  259813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:54.409723  259813 pause.go:52] kubelet running: true
	I1201 20:01:54.409800  259813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:01:54.540933  259813 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:01:54.541029  259813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:01:54.606737  259813 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:54.606755  259813 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:54.606760  259813 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:54.606763  259813 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:54.606766  259813 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:54.606769  259813 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:54.606772  259813 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:54.606775  259813 cri.go:89] found id: ""
	I1201 20:01:54.606809  259813 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:01:54.618611  259813 retry.go:31] will retry after 299.812356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:54Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:01:54.919197  259813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:54.932164  259813 pause.go:52] kubelet running: false
	I1201 20:01:54.932224  259813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:01:55.044736  259813 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:01:55.044796  259813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:01:55.112675  259813 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:55.112701  259813 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:55.112709  259813 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:55.112714  259813 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:55.112719  259813 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:55.112723  259813 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:55.112727  259813 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:55.112731  259813 cri.go:89] found id: ""
	I1201 20:01:55.112777  259813 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:01:55.124566  259813 retry.go:31] will retry after 319.639328ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:55Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:01:55.445120  259813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:55.458101  259813 pause.go:52] kubelet running: false
	I1201 20:01:55.458168  259813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:01:55.572736  259813 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:01:55.572822  259813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:01:55.646974  259813 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:55.646998  259813 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:55.647005  259813 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:55.647010  259813 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:55.647014  259813 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:55.647018  259813 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:55.647022  259813 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:55.647027  259813 cri.go:89] found id: ""
	I1201 20:01:55.647076  259813 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:01:55.661513  259813 retry.go:31] will retry after 591.059511ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:55Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:01:56.252799  259813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:56.267435  259813 pause.go:52] kubelet running: false
	I1201 20:01:56.267498  259813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:01:56.385931  259813 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:01:56.386019  259813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:01:56.458930  259813 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:56.458955  259813 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:56.458961  259813 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:56.458967  259813 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:56.458970  259813 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:56.458973  259813 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:56.458976  259813 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:56.458979  259813 cri.go:89] found id: ""
	I1201 20:01:56.459024  259813 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:01:56.473229  259813 out.go:203] 
	W1201 20:01:56.474368  259813 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:01:56.474383  259813 out.go:285] * 
	* 
	W1201 20:01:56.478370  259813 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:01:56.479706  259813 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-138480 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-138480
helpers_test.go:243: (dbg) docker inspect pause-138480:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54",
	        "Created": "2025-12-01T20:01:09.106126046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:01:09.147235238Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/hosts",
	        "LogPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54-json.log",
	        "Name": "/pause-138480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-138480:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-138480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54",
	                "LowerDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-138480",
	                "Source": "/var/lib/docker/volumes/pause-138480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-138480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-138480",
	                "name.minikube.sigs.k8s.io": "pause-138480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "acb5d8d2d5a46a1df52bc5da01069eaab6e7dd126f11b3f989351a4e67f3632f",
	            "SandboxKey": "/var/run/docker/netns/acb5d8d2d5a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33042"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33040"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33041"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-138480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40337cb2309f7a08534147b352cae4845d06a26f7aa8a750c539f4406f96f200",
	                    "EndpointID": "a45011e4a674d125dea1f0e62efa0e1c20069db3c81342f14abc75396775916b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:7e:d9:09:4c:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-138480",
	                        "ba1063f28b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
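The inspect dump above shows each published container port (22, 2376, 5000, 8443, 32443) bound to a high port on 127.0.0.1; further down, minikube reads the 22/tcp mapping with a `docker container inspect -f` Go template to reach the node over SSH. A minimal sketch of the same lookup done from Go by shelling out to the docker CLI and decoding the JSON; the helper name and error handling are illustrative only, not minikube's actual code.

```go
// sshHostPort returns the host port that Docker mapped to the container's
// 22/tcp endpoint by parsing `docker container inspect` output.
// Illustrative sketch only; minikube itself extracts this via a Go template.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	// docker inspect prints a JSON array with one element per container.
	var info []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.Unmarshal(out, &info); err != nil {
		return "", err
	}
	if len(info) == 0 {
		return "", fmt.Errorf("no inspect data for %s", container)
	}
	bindings := info[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for 22/tcp on %s", container)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := sshHostPort("pause-138480")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // e.g. 33038 in the dump above
}
```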
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-138480 -n pause-138480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-138480 -n pause-138480: exit status 2 (332.117298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-138480 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p cilium-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p NoKubernetes-684883 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-684883       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p cilium-551864 sudo crio config                                                                                                                                                                                         │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ delete  │ -p cilium-551864                                                                                                                                                                                                          │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p force-systemd-flag-882623 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ delete  │ -p NoKubernetes-684883                                                                                                                                                                                                    │ NoKubernetes-684883       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ delete  │ -p missing-upgrade-675228                                                                                                                                                                                                 │ missing-upgrade-675228    │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p cert-expiration-453210 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-453210    │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p cert-options-488320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p force-systemd-env-457376                                                                                                                                                                                               │ force-systemd-env-457376  │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ force-systemd-flag-882623 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-882623                                                                                                                                                                                              │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p pause-138480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ cert-options-488320 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ -p cert-options-488320 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p cert-options-488320                                                                                                                                                                                                    │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p stopped-upgrade-533630 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-533630    │ jenkins │ v1.35.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ stop    │ -p kubernetes-upgrade-189963                                                                                                                                                                                              │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	│ stop    │ stopped-upgrade-533630 stop                                                                                                                                                                                               │ stopped-upgrade-533630    │ jenkins │ v1.35.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p stopped-upgrade-533630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-533630    │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	│ start   │ -p pause-138480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ pause   │ -p pause-138480 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:01:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:01:48.296496  258591 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:01:48.296803  258591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:48.296813  258591 out.go:374] Setting ErrFile to fd 2...
	I1201 20:01:48.296820  258591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:48.297108  258591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:01:48.297662  258591 out.go:368] Setting JSON to false
	I1201 20:01:48.299168  258591 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6259,"bootTime":1764613049,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:01:48.299241  258591 start.go:143] virtualization: kvm guest
	I1201 20:01:48.301345  258591 out.go:179] * [pause-138480] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:01:48.302636  258591 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:01:48.302650  258591 notify.go:221] Checking for updates...
	I1201 20:01:48.304985  258591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:01:48.306108  258591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:48.307465  258591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:01:48.308682  258591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:01:48.309898  258591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:01:48.311695  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:48.312495  258591 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:01:48.337903  258591 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:01:48.338055  258591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:01:48.400909  258591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-01 20:01:48.389745621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:01:48.401007  258591 docker.go:319] overlay module found
	I1201 20:01:48.403059  258591 out.go:179] * Using the docker driver based on existing profile
	I1201 20:01:48.404643  258591 start.go:309] selected driver: docker
	I1201 20:01:48.404661  258591 start.go:927] validating driver "docker" against &{Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:48.404819  258591 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:01:48.404918  258591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:01:48.467871  258591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-01 20:01:48.457393488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:01:48.468750  258591 cni.go:84] Creating CNI manager for ""
	I1201 20:01:48.468832  258591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:48.468896  258591 start.go:353] cluster config:
	{Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:48.471073  258591 out.go:179] * Starting "pause-138480" primary control-plane node in "pause-138480" cluster
	I1201 20:01:48.472590  258591 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:01:48.474003  258591 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:01:48.477839  258591 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:01:48.477900  258591 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:01:48.477910  258591 cache.go:65] Caching tarball of preloaded images
	I1201 20:01:48.478003  258591 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:01:48.478013  258591 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:01:48.478073  258591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:01:48.478480  258591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/config.json ...
	I1201 20:01:48.502668  258591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:01:48.502690  258591 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:01:48.502711  258591 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:01:48.502744  258591 start.go:360] acquireMachinesLock for pause-138480: {Name:mk727318775028220acac14a603629c3450ad024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:01:48.502808  258591 start.go:364] duration metric: took 42.707µs to acquireMachinesLock for "pause-138480"
	I1201 20:01:48.502831  258591 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:01:48.502841  258591 fix.go:54] fixHost starting: 
	I1201 20:01:48.503071  258591 cli_runner.go:164] Run: docker container inspect pause-138480 --format={{.State.Status}}
	I1201 20:01:48.520963  258591 fix.go:112] recreateIfNeeded on pause-138480: state=Running err=<nil>
	W1201 20:01:48.521016  258591 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:01:46.247093  256706 cli_runner.go:164] Run: docker network inspect stopped-upgrade-533630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:01:46.265466  256706 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:01:46.269534  256706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:01:46.281148  256706 kubeadm.go:884] updating cluster {Name:stopped-upgrade-533630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:01:46.281274  256706 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1201 20:01:46.281351  256706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:46.325641  256706 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:46.325660  256706 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:01:46.325700  256706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:46.360927  256706 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:46.360949  256706 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:01:46.360957  256706 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1201 20:01:46.361048  256706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-533630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:01:46.361110  256706 ssh_runner.go:195] Run: crio config
	I1201 20:01:46.407338  256706 cni.go:84] Creating CNI manager for ""
	I1201 20:01:46.407367  256706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:46.407390  256706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:01:46.407420  256706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-533630 NodeName:stopped-upgrade-533630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:01:46.407584  256706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-533630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
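	The generated kubeadm config above is one multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, which minikube later feeds to the individual `kubeadm init phase ... --config /var/tmp/minikube/kubeadm.yaml` calls in this log. A minimal sketch (an assumed helper, not minikube code) that splits such a bundle and lists the kind of each document using only the Go standard library:

```go
// listKinds prints the "kind:" value of each document in a multi-document
// kubeadm.yaml like the one rendered above. Sketch only; a real parser
// would use a YAML library instead of line scanning.
package main

import (
	"fmt"
	"os"
	"strings"
)

func listKinds(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var kinds []string
	for _, doc := range strings.Split(string(data), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
			}
		}
	}
	return kinds, nil
}

func main() {
	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// expected: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds)
}
```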
	
	I1201 20:01:46.407662  256706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1201 20:01:46.417322  256706 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:01:46.417387  256706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:01:46.426190  256706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1201 20:01:46.446143  256706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:01:46.466565  256706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 20:01:46.485654  256706 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:01:46.489522  256706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:01:46.501673  256706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:46.580090  256706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:46.607063  256706 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630 for IP: 192.168.103.2
	I1201 20:01:46.607084  256706 certs.go:195] generating shared ca certs ...
	I1201 20:01:46.607099  256706 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:46.607244  256706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:01:46.607326  256706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:01:46.607345  256706 certs.go:257] generating profile certs ...
	I1201 20:01:46.607426  256706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.key
	I1201 20:01:46.607484  256706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.key.652c7607
	I1201 20:01:46.607525  256706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.key
	I1201 20:01:46.607630  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:01:46.607661  256706 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:01:46.607670  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:01:46.607694  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:01:46.607721  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:01:46.607745  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:01:46.607803  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:46.608388  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:01:46.635839  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:01:46.663714  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:01:46.697100  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:01:46.722036  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1201 20:01:46.746300  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:01:46.771548  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:01:46.798105  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:01:46.824195  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:01:46.849745  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:01:46.874599  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:01:46.900088  256706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:01:46.918033  256706 ssh_runner.go:195] Run: openssl version
	I1201 20:01:46.923681  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:01:46.933886  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.937424  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.937475  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.944328  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:01:46.953755  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:01:46.964027  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.967801  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.967847  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.975270  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:01:46.985118  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:01:46.995807  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:01:46.999421  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:01:46.999501  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:01:47.007007  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:01:47.016429  256706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:01:47.020123  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:01:47.026953  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:01:47.033500  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:01:47.040007  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:01:47.046897  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:01:47.053614  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
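	The run of `openssl x509 -noout -in ... -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least 24 hours (86400 seconds) before the cluster is restarted. The same check can be expressed with Go's standard crypto/x509 package; a minimal sketch under that assumption, not minikube's actual code path:

```go
// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration, mirroring `openssl x509 -checkend <seconds>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // openssl -checkend 86400 exits non-zero in that case
}
```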
	I1201 20:01:47.060520  256706 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-533630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:47.060611  256706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:01:47.060654  256706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:47.096234  256706 cri.go:89] found id: ""
	I1201 20:01:47.096330  256706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:01:47.106763  256706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:01:47.106782  256706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:01:47.106827  256706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:01:47.116526  256706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.117198  256706 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-533630" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:47.117625  256706 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-533630" cluster setting kubeconfig missing "stopped-upgrade-533630" context setting]
	I1201 20:01:47.118208  256706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:47.118959  256706 kapi.go:59] client config for stopped-upgrade-533630: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:47.119404  256706 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 20:01:47.119421  256706 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 20:01:47.119425  256706 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 20:01:47.119429  256706 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 20:01:47.119433  256706 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 20:01:47.119757  256706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:01:47.130741  256706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:01:24.135159207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 20:01:46.482506519 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
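	The drift check above shells out to `sudo diff -u` against the freshly rendered kubeadm.yaml.new; any non-empty diff (here, the dropped etcd proxy-refresh-interval extraArg) triggers the "stopping kube-system containers" step and the re-run of the kubeadm init phases that follows. A byte-comparison sketch of the same decision, again as an illustration rather than minikube's actual logic:

```go
// configDrifted reports whether the rendered kubeadm config differs from the
// one currently on disk. minikube itself runs `sudo diff -u old new` over SSH;
// this sketch just compares the two files byte for byte.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func configDrifted(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("reconfigure needed:", drifted)
}
```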
	I1201 20:01:47.130763  256706 kubeadm.go:1161] stopping kube-system containers ...
	I1201 20:01:47.130776  256706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 20:01:47.130826  256706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:47.172705  256706 cri.go:89] found id: ""
	I1201 20:01:47.172772  256706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 20:01:47.241173  256706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:01:47.253047  256706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Dec  1 20:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec  1 20:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  1 20:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec  1 20:01 /etc/kubernetes/scheduler.conf
	
	I1201 20:01:47.253166  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:01:47.263967  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:01:47.273607  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:01:47.286156  256706 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.286211  256706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:01:47.295367  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:01:47.304264  256706 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.304336  256706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:01:47.312879  256706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:01:47.322565  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:47.367141  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.375045  256706 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.00787221s)
	I1201 20:01:48.375110  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.572499  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.626357  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.679203  256706 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:01:48.679270  256706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:49.179460  256706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:49.196349  256706 api_server.go:72] duration metric: took 517.155078ms to wait for apiserver process to appear ...
	I1201 20:01:49.196379  256706 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:01:49.196567  256706 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1201 20:01:49.196954  256706 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
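	The wait loop above polls https://192.168.103.2:8443/healthz until the restarted apiserver answers; the first probe fails with connection refused because the control-plane static pods are still coming up. A minimal sketch of such a probe with net/http, with TLS verification disabled for brevity (minikube's real client trusts the cluster CA) and an arbitrarily chosen retry interval:

```go
// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the deadline passes. Sketch only: it skips TLS verification,
// whereas the real check is configured with the cluster CA certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval chosen for the sketch
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```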
	I1201 20:01:48.523159  258591 out.go:252] * Updating the running docker "pause-138480" container ...
	I1201 20:01:48.523189  258591 machine.go:94] provisionDockerMachine start ...
	I1201 20:01:48.523253  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.543167  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.543726  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.543746  258591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:01:48.691618  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-138480
	
	I1201 20:01:48.691649  258591 ubuntu.go:182] provisioning hostname "pause-138480"
	I1201 20:01:48.691716  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.709561  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.709773  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.709785  258591 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-138480 && echo "pause-138480" | sudo tee /etc/hostname
	I1201 20:01:48.859875  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-138480
	
	I1201 20:01:48.859960  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.878060  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.878279  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.878320  258591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-138480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-138480/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-138480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:01:49.019925  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:01:49.019951  258591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:01:49.019989  258591 ubuntu.go:190] setting up certificates
	I1201 20:01:49.019999  258591 provision.go:84] configureAuth start
	I1201 20:01:49.020045  258591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-138480
	I1201 20:01:49.038016  258591 provision.go:143] copyHostCerts
	I1201 20:01:49.038086  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:01:49.038096  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:01:49.038169  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:01:49.038334  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:01:49.038346  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:01:49.038391  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:01:49.038469  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:01:49.038477  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:01:49.038501  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:01:49.038565  258591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.pause-138480 san=[127.0.0.1 192.168.94.2 localhost minikube pause-138480]
	I1201 20:01:49.193048  258591 provision.go:177] copyRemoteCerts
	I1201 20:01:49.193117  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:01:49.193166  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.215321  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:49.323235  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:01:49.343342  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:01:49.362720  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:01:49.382766  258591 provision.go:87] duration metric: took 362.751403ms to configureAuth
	I1201 20:01:49.382796  258591 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:01:49.382986  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:49.383082  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.404511  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:49.404830  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:49.404853  258591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:01:49.755936  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:01:49.755967  258591 machine.go:97] duration metric: took 1.232769592s to provisionDockerMachine
	I1201 20:01:49.755982  258591 start.go:293] postStartSetup for "pause-138480" (driver="docker")
	I1201 20:01:49.755994  258591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:01:49.756059  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:01:49.756130  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.779437  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:49.883882  258591 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:01:49.887899  258591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:01:49.887939  258591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:01:49.887953  258591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:01:49.888011  258591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:01:49.888118  258591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:01:49.888245  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:01:49.896446  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:49.914496  258591 start.go:296] duration metric: took 158.499772ms for postStartSetup
	I1201 20:01:49.914577  258591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:01:49.914621  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.935445  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.033764  258591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:01:50.038866  258591 fix.go:56] duration metric: took 1.536019478s for fixHost
	I1201 20:01:50.038892  258591 start.go:83] releasing machines lock for "pause-138480", held for 1.53607067s
	I1201 20:01:50.038958  258591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-138480
	I1201 20:01:50.059411  258591 ssh_runner.go:195] Run: cat /version.json
	I1201 20:01:50.059462  258591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:01:50.059467  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:50.059520  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:50.080444  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.080972  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.231317  258591 ssh_runner.go:195] Run: systemctl --version
	I1201 20:01:50.237933  258591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:01:50.274035  258591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:01:50.278837  258591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:01:50.278898  258591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:01:50.287530  258591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:01:50.287555  258591 start.go:496] detecting cgroup driver to use...
	I1201 20:01:50.287581  258591 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:01:50.287619  258591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:01:50.302434  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:01:50.314445  258591 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:01:50.314504  258591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:01:50.329610  258591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:01:50.342663  258591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:01:50.452178  258591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:01:50.558753  258591 docker.go:234] disabling docker service ...
	I1201 20:01:50.558824  258591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:01:50.575244  258591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:01:50.587384  258591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:01:50.696355  258591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:01:50.805504  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:01:50.818727  258591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:01:50.833788  258591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:01:50.833840  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.843242  258591 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:01:50.843325  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.852894  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.861719  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.870459  258591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:01:50.878640  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.888266  258591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.897109  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.906013  258591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:01:50.913401  258591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:01:50.920525  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:51.028647  258591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:01:51.212541  258591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:01:51.212606  258591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:01:51.216757  258591 start.go:564] Will wait 60s for crictl version
	I1201 20:01:51.216804  258591 ssh_runner.go:195] Run: which crictl
	I1201 20:01:51.220510  258591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:01:51.243345  258591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:01:51.243433  258591 ssh_runner.go:195] Run: crio --version
	I1201 20:01:51.271118  258591 ssh_runner.go:195] Run: crio --version
	I1201 20:01:51.301210  258591 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:01:51.302480  258591 cli_runner.go:164] Run: docker network inspect pause-138480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:01:51.320387  258591 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1201 20:01:51.324851  258591 kubeadm.go:884] updating cluster {Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:01:51.324982  258591 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:01:51.325032  258591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:51.358493  258591 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:51.358516  258591 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:01:51.358583  258591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:51.384824  258591 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:51.384845  258591 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:01:51.384851  258591 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1201 20:01:51.384941  258591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-138480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:01:51.384997  258591 ssh_runner.go:195] Run: crio config
	I1201 20:01:51.429203  258591 cni.go:84] Creating CNI manager for ""
	I1201 20:01:51.429219  258591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:51.429232  258591 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:01:51.429252  258591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-138480 NodeName:pause-138480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:01:51.429391  258591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-138480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:01:51.429464  258591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:01:51.437537  258591 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:01:51.437592  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:01:51.445591  258591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1201 20:01:51.458956  258591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:01:51.471771  258591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1201 20:01:51.484146  258591 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:01:51.488186  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:51.598857  258591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:51.612324  258591 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480 for IP: 192.168.94.2
	I1201 20:01:51.612347  258591 certs.go:195] generating shared ca certs ...
	I1201 20:01:51.612365  258591 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:51.612533  258591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:01:51.612595  258591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:01:51.612614  258591 certs.go:257] generating profile certs ...
	I1201 20:01:51.612719  258591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key
	I1201 20:01:51.612800  258591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.key.2b63a7b4
	I1201 20:01:51.612854  258591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.key
	I1201 20:01:51.612988  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:01:51.613033  258591 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:01:51.613047  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:01:51.613098  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:01:51.613134  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:01:51.613170  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:01:51.613232  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:51.613874  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:01:51.632936  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:01:51.651712  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:01:51.669081  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:01:51.687196  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:01:51.707112  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:01:51.725579  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:01:51.743627  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:01:51.760984  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:01:51.779684  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:01:51.797590  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:01:51.815168  258591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:01:51.827923  258591 ssh_runner.go:195] Run: openssl version
	I1201 20:01:51.833892  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:01:51.843757  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.847970  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.848030  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.882427  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:01:51.891251  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:01:51.899999  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.903544  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.903588  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.937939  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:01:51.946465  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:01:51.955083  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.958873  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.958928  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.996390  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:01:52.005004  258591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:01:52.009202  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:01:52.044406  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:01:52.080009  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:01:52.116030  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:01:52.150503  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:01:52.186262  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:01:52.220810  258591 kubeadm.go:401] StartCluster: {Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:52.220931  258591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:01:52.220987  258591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:52.248505  258591 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:52.248525  258591 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:52.248528  258591 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:52.248535  258591 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:52.248538  258591 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:52.248541  258591 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:52.248543  258591 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:52.248546  258591 cri.go:89] found id: ""
	I1201 20:01:52.248580  258591 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:01:52.259987  258591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:52Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:01:52.260060  258591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:01:52.268404  258591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:01:52.268420  258591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:01:52.268456  258591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:01:52.275927  258591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:52.276710  258591 kubeconfig.go:125] found "pause-138480" server: "https://192.168.94.2:8443"
	I1201 20:01:52.277744  258591 kapi.go:59] client config for pause-138480: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:52.278153  258591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 20:01:52.278170  258591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 20:01:52.278175  258591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 20:01:52.278182  258591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 20:01:52.278192  258591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 20:01:52.278507  258591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:01:52.286439  258591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1201 20:01:52.286474  258591 kubeadm.go:602] duration metric: took 18.04795ms to restartPrimaryControlPlane
	I1201 20:01:52.286493  258591 kubeadm.go:403] duration metric: took 65.694022ms to StartCluster
	I1201 20:01:52.286510  258591 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:52.286578  258591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:52.287702  258591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:52.287944  258591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:01:52.288006  258591 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:01:52.288185  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:52.290447  258591 out.go:179] * Verifying Kubernetes components...
	I1201 20:01:52.290451  258591 out.go:179] * Enabled addons: 
	I1201 20:01:52.291607  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:52.291614  258591 addons.go:530] duration metric: took 3.615223ms for enable addons: enabled=[]
	I1201 20:01:52.400926  258591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:52.414087  258591 node_ready.go:35] waiting up to 6m0s for node "pause-138480" to be "Ready" ...
	I1201 20:01:52.421569  258591 node_ready.go:49] node "pause-138480" is "Ready"
	I1201 20:01:52.421588  258591 node_ready.go:38] duration metric: took 7.453291ms for node "pause-138480" to be "Ready" ...
	I1201 20:01:52.421598  258591 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:01:52.421636  258591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:52.432636  258591 api_server.go:72] duration metric: took 144.662421ms to wait for apiserver process to appear ...
	I1201 20:01:52.432655  258591 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:01:52.432668  258591 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1201 20:01:52.436553  258591 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1201 20:01:52.437457  258591 api_server.go:141] control plane version: v1.34.2
	I1201 20:01:52.437476  258591 api_server.go:131] duration metric: took 4.816226ms to wait for apiserver health ...
	I1201 20:01:52.437484  258591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:01:52.440783  258591 system_pods.go:59] 7 kube-system pods found
	I1201 20:01:52.440806  258591 system_pods.go:61] "coredns-66bc5c9577-jshbx" [068fa400-c778-4f67-bfd3-486ad54de1b0] Running
	I1201 20:01:52.440811  258591 system_pods.go:61] "etcd-pause-138480" [a1f5bd7d-aae7-4872-ab98-9320a44d42c5] Running
	I1201 20:01:52.440815  258591 system_pods.go:61] "kindnet-vp7xw" [11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1] Running
	I1201 20:01:52.440818  258591 system_pods.go:61] "kube-apiserver-pause-138480" [102e70a9-fc2d-4d05-b477-a0acef9150be] Running
	I1201 20:01:52.440821  258591 system_pods.go:61] "kube-controller-manager-pause-138480" [a69a7a4f-bc07-49df-9e38-844936cb2885] Running
	I1201 20:01:52.440824  258591 system_pods.go:61] "kube-proxy-fsrnk" [8275c679-6ab0-4cf4-8499-44804b6c5e5f] Running
	I1201 20:01:52.440828  258591 system_pods.go:61] "kube-scheduler-pause-138480" [a7d1d4e0-5612-424c-be48-3f9af521b448] Running
	I1201 20:01:52.440833  258591 system_pods.go:74] duration metric: took 3.34381ms to wait for pod list to return data ...
	I1201 20:01:52.440839  258591 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:01:52.442876  258591 default_sa.go:45] found service account: "default"
	I1201 20:01:52.442898  258591 default_sa.go:55] duration metric: took 2.053979ms for default service account to be created ...
	I1201 20:01:52.442908  258591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:01:52.445509  258591 system_pods.go:86] 7 kube-system pods found
	I1201 20:01:52.445530  258591 system_pods.go:89] "coredns-66bc5c9577-jshbx" [068fa400-c778-4f67-bfd3-486ad54de1b0] Running
	I1201 20:01:52.445535  258591 system_pods.go:89] "etcd-pause-138480" [a1f5bd7d-aae7-4872-ab98-9320a44d42c5] Running
	I1201 20:01:52.445542  258591 system_pods.go:89] "kindnet-vp7xw" [11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1] Running
	I1201 20:01:52.445546  258591 system_pods.go:89] "kube-apiserver-pause-138480" [102e70a9-fc2d-4d05-b477-a0acef9150be] Running
	I1201 20:01:52.445549  258591 system_pods.go:89] "kube-controller-manager-pause-138480" [a69a7a4f-bc07-49df-9e38-844936cb2885] Running
	I1201 20:01:52.445552  258591 system_pods.go:89] "kube-proxy-fsrnk" [8275c679-6ab0-4cf4-8499-44804b6c5e5f] Running
	I1201 20:01:52.445555  258591 system_pods.go:89] "kube-scheduler-pause-138480" [a7d1d4e0-5612-424c-be48-3f9af521b448] Running
	I1201 20:01:52.445562  258591 system_pods.go:126] duration metric: took 2.648365ms to wait for k8s-apps to be running ...
	I1201 20:01:52.445572  258591 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:01:52.445612  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:52.458885  258591 system_svc.go:56] duration metric: took 13.30547ms WaitForService to wait for kubelet
	I1201 20:01:52.458909  258591 kubeadm.go:587] duration metric: took 170.937748ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:01:52.458928  258591 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:01:52.461796  258591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:01:52.461819  258591 node_conditions.go:123] node cpu capacity is 8
	I1201 20:01:52.461830  258591 node_conditions.go:105] duration metric: took 2.895264ms to run NodePressure ...
	I1201 20:01:52.461840  258591 start.go:242] waiting for startup goroutines ...
	I1201 20:01:52.461846  258591 start.go:247] waiting for cluster config update ...
	I1201 20:01:52.461854  258591 start.go:256] writing updated cluster config ...
	I1201 20:01:52.462134  258591 ssh_runner.go:195] Run: rm -f paused
	I1201 20:01:52.465884  258591 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:01:52.466593  258591 kapi.go:59] client config for pause-138480: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:52.469170  258591 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jshbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.473634  258591 pod_ready.go:94] pod "coredns-66bc5c9577-jshbx" is "Ready"
	I1201 20:01:52.473656  258591 pod_ready.go:86] duration metric: took 4.46582ms for pod "coredns-66bc5c9577-jshbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.475557  258591 pod_ready.go:83] waiting for pod "etcd-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.478961  258591 pod_ready.go:94] pod "etcd-pause-138480" is "Ready"
	I1201 20:01:52.478978  258591 pod_ready.go:86] duration metric: took 3.405491ms for pod "etcd-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.480712  258591 pod_ready.go:83] waiting for pod "kube-apiserver-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.484155  258591 pod_ready.go:94] pod "kube-apiserver-pause-138480" is "Ready"
	I1201 20:01:52.484174  258591 pod_ready.go:86] duration metric: took 3.444736ms for pod "kube-apiserver-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.485924  258591 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.869969  258591 pod_ready.go:94] pod "kube-controller-manager-pause-138480" is "Ready"
	I1201 20:01:52.869994  258591 pod_ready.go:86] duration metric: took 384.047726ms for pod "kube-controller-manager-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.070141  258591 pod_ready.go:83] waiting for pod "kube-proxy-fsrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.470250  258591 pod_ready.go:94] pod "kube-proxy-fsrnk" is "Ready"
	I1201 20:01:53.470277  258591 pod_ready.go:86] duration metric: took 400.11172ms for pod "kube-proxy-fsrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.670301  258591 pod_ready.go:83] waiting for pod "kube-scheduler-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:54.070387  258591 pod_ready.go:94] pod "kube-scheduler-pause-138480" is "Ready"
	I1201 20:01:54.070414  258591 pod_ready.go:86] duration metric: took 400.076561ms for pod "kube-scheduler-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:54.070429  258591 pod_ready.go:40] duration metric: took 1.60451867s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:01:54.112985  258591 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:01:54.115508  258591 out.go:179] * Done! kubectl is now configured to use "pause-138480" cluster and "default" namespace by default
	I1201 20:01:49.697448  256706 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.127647121Z" level=info msg="RDT not available in the host system"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.127658283Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128502122Z" level=info msg="Conmon does support the --sync option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128518668Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128531625Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.12923598Z" level=info msg="Conmon does support the --sync option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.129248892Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133028036Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133058244Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133647706Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.134031607Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.134090029Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.207569266Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-jshbx Namespace:kube-system ID:ef391e39fc796a54701f0831c2ef2ca23471f4892dc771bf09f273ea2ac68d12 UID:068fa400-c778-4f67-bfd3-486ad54de1b0 NetNS:/var/run/netns/cc31ca13-4e82-4dd1-9c8c-711fe40fcdf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132ac0}] Aliases:map[]}"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.207798154Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-jshbx for CNI network kindnet (type=ptp)"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.20824342Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208270419Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208342058Z" level=info msg="Create NRI interface"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208474438Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.20848913Z" level=info msg="runtime interface created"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208502509Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208511453Z" level=info msg="runtime interface starting up..."
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208519282Z" level=info msg="starting plugins..."
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208534444Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208925316Z" level=info msg="No systemd watchdog enabled"
	Dec 01 20:01:51 pause-138480 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	147ea3d08b8b4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   ef391e39fc796       coredns-66bc5c9577-jshbx               kube-system
	ad04cccc2b816       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   23 seconds ago      Running             kube-proxy                0                   25c11b9a6413d       kube-proxy-fsrnk                       kube-system
	b4dadfb6ece00       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   3520acc84aaf5       kindnet-vp7xw                          kube-system
	374d5f1391210       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   33 seconds ago      Running             kube-scheduler            0                   5ddcfc14aa78a       kube-scheduler-pause-138480            kube-system
	0158754becba5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   33 seconds ago      Running             kube-controller-manager   0                   a6a78829531d9       kube-controller-manager-pause-138480   kube-system
	2b9385f7318ca       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   33 seconds ago      Running             etcd                      0                   155fa5407ea25       etcd-pause-138480                      kube-system
	593a14807bb0c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   33 seconds ago      Running             kube-apiserver            0                   59736cb10512c       kube-apiserver-pause-138480            kube-system
	
	
	==> coredns [147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43674 - 15459 "HINFO IN 34557710388052300.8662693174993530794. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.030180416s
	
	
	==> describe nodes <==
	Name:               pause-138480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-138480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=pause-138480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_01_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:01:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-138480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-138480
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                efb0169e-3bf7-4721-9a62-4283b1d5ce1e
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jshbx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-138480                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-vp7xw                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-138480             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-138480    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-fsrnk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-138480             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node pause-138480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node pause-138480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node pause-138480 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node pause-138480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node pause-138480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node pause-138480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node pause-138480 event: Registered Node pause-138480 in Controller
	  Normal  NodeReady                12s                kubelet          Node pause-138480 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.060605] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023816] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +4.031647] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +8.063094] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[Dec 1 19:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[ +32.252518] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	
	
	==> etcd [2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327] <==
	{"level":"warn","ts":"2025-12-01T20:01:25.111242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.119531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.128021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.139514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.148093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.157144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.168109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.181418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.189976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.198966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.207271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.233902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.241707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.251030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.260009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.268473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.277760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.284443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.302871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.312077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.329722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.334555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.345225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.355606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.418275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40198","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:01:57 up  1:44,  0 user,  load average: 5.00, 3.10, 1.89
	Linux pause-138480 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412] <==
	I1201 20:01:34.449923       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:01:34.450165       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1201 20:01:34.543718       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:01:34.543754       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:01:34.543771       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:01:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:01:34.654227       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:01:34.654343       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:01:34.654373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:01:34.654531       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:01:35.044645       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:01:35.044684       1 metrics.go:72] Registering metrics
	I1201 20:01:35.044736       1 controller.go:711] "Syncing nftables rules"
	I1201 20:01:44.653785       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:01:44.653833       1 main.go:301] handling current node
	I1201 20:01:54.655770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:01:54.655881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c] <==
	I1201 20:01:25.985803       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1201 20:01:25.985835       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 20:01:25.987390       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:25.987427       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1201 20:01:25.991866       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1201 20:01:25.992416       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:25.994069       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:01:26.195592       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:01:26.886658       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1201 20:01:26.890250       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:01:26.890270       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:01:27.353462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:01:27.387615       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:01:27.492753       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:01:27.498941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1201 20:01:27.499992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:01:27.504271       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:01:27.937391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:01:28.602550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:01:28.612326       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:01:28.619967       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:01:33.791352       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:01:33.842997       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1201 20:01:33.891873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:33.895713       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c] <==
	I1201 20:01:32.985843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:01:32.985860       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:01:32.985869       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:01:32.986094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 20:01:32.987143       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:01:32.987162       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:01:32.987189       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1201 20:01:32.987211       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:01:32.987239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 20:01:32.987313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 20:01:32.987389       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:01:32.987410       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 20:01:32.987660       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:01:32.987717       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:01:32.987756       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:01:32.988876       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:01:32.988921       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1201 20:01:32.988982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 20:01:32.989043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-138480"
	I1201 20:01:32.989125       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 20:01:32.992710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:01:32.995141       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:01:32.997876       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-138480" podCIDRs=["10.244.0.0/24"]
	I1201 20:01:33.006793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:01:48.133236       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91] <==
	I1201 20:01:34.317955       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:01:34.387867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:01:34.488551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:01:34.488605       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1201 20:01:34.488709       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:01:34.511050       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:01:34.511122       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:01:34.516660       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:01:34.517026       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:01:34.517062       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:01:34.519032       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:01:34.519058       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:01:34.519076       1 config.go:309] "Starting node config controller"
	I1201 20:01:34.519096       1 config.go:200] "Starting service config controller"
	I1201 20:01:34.519103       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:01:34.519109       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:01:34.519101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:01:34.519109       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:01:34.519120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:01:34.619944       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:01:34.619968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:01:34.619990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78] <==
	E1201 20:01:25.949005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:01:25.949087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:01:25.949147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:01:25.949188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:01:25.949605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:01:25.949352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:01:25.949685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:01:25.949737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:01:25.949801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:01:25.949845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:01:25.949888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:01:25.949987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:01:25.949893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:01:25.950558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:01:26.847959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:01:26.914104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:01:26.923238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:01:27.006056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:01:27.017256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:01:27.070856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:01:27.070856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:01:27.155634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:01:27.188817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:01:27.384789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1201 20:01:29.844837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:01:29 pause-138480 kubelet[1300]: E1201 20:01:29.473822    1300 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-138480\" already exists" pod="kube-system/kube-apiserver-pause-138480"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.521509    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-138480" podStartSLOduration=1.521484188 podStartE2EDuration="1.521484188s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.510158502 +0000 UTC m=+1.158059926" watchObservedRunningTime="2025-12-01 20:01:29.521484188 +0000 UTC m=+1.169385602"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.535052    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-138480" podStartSLOduration=1.535027398 podStartE2EDuration="1.535027398s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.521689085 +0000 UTC m=+1.169590501" watchObservedRunningTime="2025-12-01 20:01:29.535027398 +0000 UTC m=+1.182928822"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.548204    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-138480" podStartSLOduration=2.548184378 podStartE2EDuration="2.548184378s" podCreationTimestamp="2025-12-01 20:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.535392306 +0000 UTC m=+1.183293729" watchObservedRunningTime="2025-12-01 20:01:29.548184378 +0000 UTC m=+1.196085800"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.564995    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-138480" podStartSLOduration=1.564971833 podStartE2EDuration="1.564971833s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.549082665 +0000 UTC m=+1.196984072" watchObservedRunningTime="2025-12-01 20:01:29.564971833 +0000 UTC m=+1.212873257"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.084607    1300 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.085638    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969085    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-xtables-lock\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969139    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8275c679-6ab0-4cf4-8499-44804b6c5e5f-lib-modules\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969165    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khmp\" (UniqueName: \"kubernetes.io/projected/8275c679-6ab0-4cf4-8499-44804b6c5e5f-kube-api-access-8khmp\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969196    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-lib-modules\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969228    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-cni-cfg\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969248    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbp7\" (UniqueName: \"kubernetes.io/projected/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-kube-api-access-9pbp7\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969268    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8275c679-6ab0-4cf4-8499-44804b6c5e5f-kube-proxy\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969303    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8275c679-6ab0-4cf4-8499-44804b6c5e5f-xtables-lock\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:34 pause-138480 kubelet[1300]: I1201 20:01:34.481210    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vp7xw" podStartSLOduration=1.4811878809999999 podStartE2EDuration="1.481187881s" podCreationTimestamp="2025-12-01 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:34.481055581 +0000 UTC m=+6.128957041" watchObservedRunningTime="2025-12-01 20:01:34.481187881 +0000 UTC m=+6.129089303"
	Dec 01 20:01:34 pause-138480 kubelet[1300]: I1201 20:01:34.490924    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fsrnk" podStartSLOduration=1.490905152 podStartE2EDuration="1.490905152s" podCreationTimestamp="2025-12-01 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:34.490827657 +0000 UTC m=+6.138729077" watchObservedRunningTime="2025-12-01 20:01:34.490905152 +0000 UTC m=+6.138806574"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.169730    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.248950    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/068fa400-c778-4f67-bfd3-486ad54de1b0-config-volume\") pod \"coredns-66bc5c9577-jshbx\" (UID: \"068fa400-c778-4f67-bfd3-486ad54de1b0\") " pod="kube-system/coredns-66bc5c9577-jshbx"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.249008    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgksc\" (UniqueName: \"kubernetes.io/projected/068fa400-c778-4f67-bfd3-486ad54de1b0-kube-api-access-pgksc\") pod \"coredns-66bc5c9577-jshbx\" (UID: \"068fa400-c778-4f67-bfd3-486ad54de1b0\") " pod="kube-system/coredns-66bc5c9577-jshbx"
	Dec 01 20:01:46 pause-138480 kubelet[1300]: I1201 20:01:46.523974    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jshbx" podStartSLOduration=12.523946523 podStartE2EDuration="12.523946523s" podCreationTimestamp="2025-12-01 20:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:46.511582548 +0000 UTC m=+18.159483972" watchObservedRunningTime="2025-12-01 20:01:46.523946523 +0000 UTC m=+18.171847947"
	Dec 01 20:01:54 pause-138480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:01:54 pause-138480 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:01:54 pause-138480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:01:54 pause-138480 systemd[1]: kubelet.service: Consumed 1.237s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-138480 -n pause-138480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-138480 -n pause-138480: exit status 2 (337.856317ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-138480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-138480
helpers_test.go:243: (dbg) docker inspect pause-138480:

-- stdout --
	[
	    {
	        "Id": "ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54",
	        "Created": "2025-12-01T20:01:09.106126046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:01:09.147235238Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/hosts",
	        "LogPath": "/var/lib/docker/containers/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54/ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54-json.log",
	        "Name": "/pause-138480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-138480:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-138480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ba1063f28b44cc93afaf745504265d8640ae574309120d2e7b72c6a3f0097c54",
	                "LowerDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52d2b0871669b8088b37ecfc73a93f318dff37988038668b3944d3fd779d39d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-138480",
	                "Source": "/var/lib/docker/volumes/pause-138480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-138480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-138480",
	                "name.minikube.sigs.k8s.io": "pause-138480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "acb5d8d2d5a46a1df52bc5da01069eaab6e7dd126f11b3f989351a4e67f3632f",
	            "SandboxKey": "/var/run/docker/netns/acb5d8d2d5a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33042"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33040"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33041"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-138480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40337cb2309f7a08534147b352cae4845d06a26f7aa8a750c539f4406f96f200",
	                    "EndpointID": "a45011e4a674d125dea1f0e62efa0e1c20069db3c81342f14abc75396775916b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:7e:d9:09:4c:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-138480",
	                        "ba1063f28b44"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
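For reference, a minimal sketch (not part of the test suite) of how the container state captured in an inspect payload like the one above can be read programmatically with the Docker Go SDK. The container name pause-138480 comes from this report; everything else, including the error handling, is illustrative only.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	// Connect using the same environment (DOCKER_HOST etc.) the test host would use.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// "pause-138480" is the profile/container name taken from the report above.
	info, err := cli.ContainerInspect(ctx, "pause-138480")
	if err != nil {
		log.Fatal(err)
	}
	// After a successful pause, Status would read "paused" and Paused would be true;
	// the status check that follows in this report still shows the host as Running.
	fmt.Printf("status=%s paused=%v\n", info.State.Status, info.State.Paused)
}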
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-138480 -n pause-138480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-138480 -n pause-138480: exit status 2 (328.005771ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
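As a side note, and only as an illustrative sketch rather than minikube's own code: the --format={{.Host}} flag used above is a standard Go text/template applied to the status result, which is why the captured stdout is just the single word Running. The struct below is an assumption for demonstration, not minikube's actual status type.

package main

import (
	"os"
	"text/template"
)

// statusView loosely mirrors the kind of value a --format template is rendered against;
// the field names here are illustrative assumptions.
type statusView struct {
	Host    string
	Kubelet string
}

func main() {
	// {{.Host}} selects only the Host field, so nothing else is printed.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, statusView{Host: "Running", Kubelet: "Stopped"})
}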
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-138480 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p cilium-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p NoKubernetes-684883 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-684883       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ ssh     │ -p cilium-551864 sudo crio config                                                                                                                                                                                         │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │                     │
	│ delete  │ -p cilium-551864                                                                                                                                                                                                          │ cilium-551864             │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p force-systemd-flag-882623 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ delete  │ -p NoKubernetes-684883                                                                                                                                                                                                    │ NoKubernetes-684883       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ delete  │ -p missing-upgrade-675228                                                                                                                                                                                                 │ missing-upgrade-675228    │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p cert-expiration-453210 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-453210    │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p cert-options-488320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p force-systemd-env-457376                                                                                                                                                                                               │ force-systemd-env-457376  │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:00 UTC │
	│ start   │ -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ force-systemd-flag-882623 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:00 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-882623                                                                                                                                                                                              │ force-systemd-flag-882623 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p pause-138480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ cert-options-488320 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ ssh     │ -p cert-options-488320 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ delete  │ -p cert-options-488320                                                                                                                                                                                                    │ cert-options-488320       │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p stopped-upgrade-533630 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-533630    │ jenkins │ v1.35.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ stop    │ -p kubernetes-upgrade-189963                                                                                                                                                                                              │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-189963 │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	│ stop    │ stopped-upgrade-533630 stop                                                                                                                                                                                               │ stopped-upgrade-533630    │ jenkins │ v1.35.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p stopped-upgrade-533630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-533630    │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	│ start   │ -p pause-138480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:01 UTC │
	│ pause   │ -p pause-138480 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-138480              │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:01:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:01:48.296496  258591 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:01:48.296803  258591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:48.296813  258591 out.go:374] Setting ErrFile to fd 2...
	I1201 20:01:48.296820  258591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:48.297108  258591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:01:48.297662  258591 out.go:368] Setting JSON to false
	I1201 20:01:48.299168  258591 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6259,"bootTime":1764613049,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:01:48.299241  258591 start.go:143] virtualization: kvm guest
	I1201 20:01:48.301345  258591 out.go:179] * [pause-138480] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:01:48.302636  258591 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:01:48.302650  258591 notify.go:221] Checking for updates...
	I1201 20:01:48.304985  258591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:01:48.306108  258591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:48.307465  258591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:01:48.308682  258591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:01:48.309898  258591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:01:48.311695  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:48.312495  258591 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:01:48.337903  258591 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:01:48.338055  258591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:01:48.400909  258591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-01 20:01:48.389745621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:01:48.401007  258591 docker.go:319] overlay module found
	I1201 20:01:48.403059  258591 out.go:179] * Using the docker driver based on existing profile
	I1201 20:01:48.404643  258591 start.go:309] selected driver: docker
	I1201 20:01:48.404661  258591 start.go:927] validating driver "docker" against &{Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:48.404819  258591 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:01:48.404918  258591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:01:48.467871  258591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-01 20:01:48.457393488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:01:48.468750  258591 cni.go:84] Creating CNI manager for ""
	I1201 20:01:48.468832  258591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:48.468896  258591 start.go:353] cluster config:
	{Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:48.471073  258591 out.go:179] * Starting "pause-138480" primary control-plane node in "pause-138480" cluster
	I1201 20:01:48.472590  258591 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:01:48.474003  258591 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:01:48.477839  258591 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:01:48.477900  258591 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:01:48.477910  258591 cache.go:65] Caching tarball of preloaded images
	I1201 20:01:48.478003  258591 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:01:48.478013  258591 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:01:48.478073  258591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:01:48.478480  258591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/config.json ...
	I1201 20:01:48.502668  258591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:01:48.502690  258591 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:01:48.502711  258591 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:01:48.502744  258591 start.go:360] acquireMachinesLock for pause-138480: {Name:mk727318775028220acac14a603629c3450ad024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:01:48.502808  258591 start.go:364] duration metric: took 42.707µs to acquireMachinesLock for "pause-138480"
	I1201 20:01:48.502831  258591 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:01:48.502841  258591 fix.go:54] fixHost starting: 
	I1201 20:01:48.503071  258591 cli_runner.go:164] Run: docker container inspect pause-138480 --format={{.State.Status}}
	I1201 20:01:48.520963  258591 fix.go:112] recreateIfNeeded on pause-138480: state=Running err=<nil>
	W1201 20:01:48.521016  258591 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:01:46.247093  256706 cli_runner.go:164] Run: docker network inspect stopped-upgrade-533630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:01:46.265466  256706 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:01:46.269534  256706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:01:46.281148  256706 kubeadm.go:884] updating cluster {Name:stopped-upgrade-533630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:01:46.281274  256706 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1201 20:01:46.281351  256706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:46.325641  256706 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:46.325660  256706 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:01:46.325700  256706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:46.360927  256706 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:46.360949  256706 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:01:46.360957  256706 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.32.0 crio true true} ...
	I1201 20:01:46.361048  256706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-533630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:01:46.361110  256706 ssh_runner.go:195] Run: crio config
	I1201 20:01:46.407338  256706 cni.go:84] Creating CNI manager for ""
	I1201 20:01:46.407367  256706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:46.407390  256706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:01:46.407420  256706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-533630 NodeName:stopped-upgrade-533630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:01:46.407584  256706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-533630"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:01:46.407662  256706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1201 20:01:46.417322  256706 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:01:46.417387  256706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:01:46.426190  256706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1201 20:01:46.446143  256706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:01:46.466565  256706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1201 20:01:46.485654  256706 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:01:46.489522  256706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:01:46.501673  256706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:46.580090  256706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:46.607063  256706 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630 for IP: 192.168.103.2
	I1201 20:01:46.607084  256706 certs.go:195] generating shared ca certs ...
	I1201 20:01:46.607099  256706 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:46.607244  256706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:01:46.607326  256706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:01:46.607345  256706 certs.go:257] generating profile certs ...
	I1201 20:01:46.607426  256706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.key
	I1201 20:01:46.607484  256706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.key.652c7607
	I1201 20:01:46.607525  256706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.key
	I1201 20:01:46.607630  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:01:46.607661  256706 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:01:46.607670  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:01:46.607694  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:01:46.607721  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:01:46.607745  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:01:46.607803  256706 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:46.608388  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:01:46.635839  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:01:46.663714  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:01:46.697100  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:01:46.722036  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1201 20:01:46.746300  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:01:46.771548  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:01:46.798105  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:01:46.824195  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:01:46.849745  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:01:46.874599  256706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:01:46.900088  256706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:01:46.918033  256706 ssh_runner.go:195] Run: openssl version
	I1201 20:01:46.923681  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:01:46.933886  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.937424  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.937475  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:01:46.944328  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:01:46.953755  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:01:46.964027  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.967801  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.967847  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:46.975270  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:01:46.985118  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:01:46.995807  256706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:01:46.999421  256706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:01:46.999501  256706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:01:47.007007  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:01:47.016429  256706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:01:47.020123  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:01:47.026953  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:01:47.033500  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:01:47.040007  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:01:47.046897  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:01:47.053614  256706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:01:47.060520  256706 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-533630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-533630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:47.060611  256706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:01:47.060654  256706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:47.096234  256706 cri.go:89] found id: ""
	I1201 20:01:47.096330  256706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:01:47.106763  256706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:01:47.106782  256706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:01:47.106827  256706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:01:47.116526  256706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.117198  256706 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-533630" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:47.117625  256706 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-533630" cluster setting kubeconfig missing "stopped-upgrade-533630" context setting]
	I1201 20:01:47.118208  256706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:47.118959  256706 kapi.go:59] client config for stopped-upgrade-533630: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/stopped-upgrade-533630/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:47.119404  256706 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 20:01:47.119421  256706 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 20:01:47.119425  256706 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 20:01:47.119429  256706 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 20:01:47.119433  256706 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 20:01:47.119757  256706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:01:47.130741  256706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-01 20:01:24.135159207 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-01 20:01:46.482506519 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
	I1201 20:01:47.130763  256706 kubeadm.go:1161] stopping kube-system containers ...
	I1201 20:01:47.130776  256706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 20:01:47.130826  256706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:47.172705  256706 cri.go:89] found id: ""
	I1201 20:01:47.172772  256706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 20:01:47.241173  256706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:01:47.253047  256706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Dec  1 20:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec  1 20:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  1 20:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Dec  1 20:01 /etc/kubernetes/scheduler.conf
	
	I1201 20:01:47.253166  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:01:47.263967  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:01:47.273607  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:01:47.286156  256706 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.286211  256706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:01:47.295367  256706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:01:47.304264  256706 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:47.304336  256706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:01:47.312879  256706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:01:47.322565  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:47.367141  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.375045  256706 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.00787221s)
	I1201 20:01:48.375110  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.572499  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.626357  256706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:01:48.679203  256706 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:01:48.679270  256706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:49.179460  256706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:49.196349  256706 api_server.go:72] duration metric: took 517.155078ms to wait for apiserver process to appear ...
	I1201 20:01:49.196379  256706 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:01:49.196567  256706 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1201 20:01:49.196954  256706 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1201 20:01:48.523159  258591 out.go:252] * Updating the running docker "pause-138480" container ...
	I1201 20:01:48.523189  258591 machine.go:94] provisionDockerMachine start ...
	I1201 20:01:48.523253  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.543167  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.543726  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.543746  258591 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:01:48.691618  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-138480
	
	I1201 20:01:48.691649  258591 ubuntu.go:182] provisioning hostname "pause-138480"
	I1201 20:01:48.691716  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.709561  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.709773  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.709785  258591 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-138480 && echo "pause-138480" | sudo tee /etc/hostname
	I1201 20:01:48.859875  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-138480
	
	I1201 20:01:48.859960  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:48.878060  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:48.878279  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:48.878320  258591 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-138480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-138480/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-138480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:01:49.019925  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:01:49.019951  258591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:01:49.019989  258591 ubuntu.go:190] setting up certificates
	I1201 20:01:49.019999  258591 provision.go:84] configureAuth start
	I1201 20:01:49.020045  258591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-138480
	I1201 20:01:49.038016  258591 provision.go:143] copyHostCerts
	I1201 20:01:49.038086  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:01:49.038096  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:01:49.038169  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:01:49.038334  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:01:49.038346  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:01:49.038391  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:01:49.038469  258591 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:01:49.038477  258591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:01:49.038501  258591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:01:49.038565  258591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.pause-138480 san=[127.0.0.1 192.168.94.2 localhost minikube pause-138480]
	I1201 20:01:49.193048  258591 provision.go:177] copyRemoteCerts
	I1201 20:01:49.193117  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:01:49.193166  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.215321  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:49.323235  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:01:49.343342  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:01:49.362720  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:01:49.382766  258591 provision.go:87] duration metric: took 362.751403ms to configureAuth
	I1201 20:01:49.382796  258591 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:01:49.382986  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:49.383082  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.404511  258591 main.go:143] libmachine: Using SSH client type: native
	I1201 20:01:49.404830  258591 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1201 20:01:49.404853  258591 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:01:49.755936  258591 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:01:49.755967  258591 machine.go:97] duration metric: took 1.232769592s to provisionDockerMachine
	I1201 20:01:49.755982  258591 start.go:293] postStartSetup for "pause-138480" (driver="docker")
	I1201 20:01:49.755994  258591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:01:49.756059  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:01:49.756130  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.779437  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:49.883882  258591 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:01:49.887899  258591 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:01:49.887939  258591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:01:49.887953  258591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:01:49.888011  258591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:01:49.888118  258591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:01:49.888245  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:01:49.896446  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:49.914496  258591 start.go:296] duration metric: took 158.499772ms for postStartSetup
	I1201 20:01:49.914577  258591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:01:49.914621  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:49.935445  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.033764  258591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:01:50.038866  258591 fix.go:56] duration metric: took 1.536019478s for fixHost
	I1201 20:01:50.038892  258591 start.go:83] releasing machines lock for "pause-138480", held for 1.53607067s
	I1201 20:01:50.038958  258591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-138480
	I1201 20:01:50.059411  258591 ssh_runner.go:195] Run: cat /version.json
	I1201 20:01:50.059462  258591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:01:50.059467  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:50.059520  258591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-138480
	I1201 20:01:50.080444  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.080972  258591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/pause-138480/id_rsa Username:docker}
	I1201 20:01:50.231317  258591 ssh_runner.go:195] Run: systemctl --version
	I1201 20:01:50.237933  258591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:01:50.274035  258591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:01:50.278837  258591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:01:50.278898  258591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:01:50.287530  258591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:01:50.287555  258591 start.go:496] detecting cgroup driver to use...
	I1201 20:01:50.287581  258591 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:01:50.287619  258591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:01:50.302434  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:01:50.314445  258591 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:01:50.314504  258591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:01:50.329610  258591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:01:50.342663  258591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:01:50.452178  258591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:01:50.558753  258591 docker.go:234] disabling docker service ...
	I1201 20:01:50.558824  258591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:01:50.575244  258591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:01:50.587384  258591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:01:50.696355  258591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:01:50.805504  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:01:50.818727  258591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:01:50.833788  258591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:01:50.833840  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.843242  258591 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:01:50.843325  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.852894  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.861719  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.870459  258591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:01:50.878640  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.888266  258591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.897109  258591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:01:50.906013  258591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:01:50.913401  258591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:01:50.920525  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:51.028647  258591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:01:51.212541  258591 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:01:51.212606  258591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:01:51.216757  258591 start.go:564] Will wait 60s for crictl version
	I1201 20:01:51.216804  258591 ssh_runner.go:195] Run: which crictl
	I1201 20:01:51.220510  258591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:01:51.243345  258591 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:01:51.243433  258591 ssh_runner.go:195] Run: crio --version
	I1201 20:01:51.271118  258591 ssh_runner.go:195] Run: crio --version
	I1201 20:01:51.301210  258591 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:01:51.302480  258591 cli_runner.go:164] Run: docker network inspect pause-138480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:01:51.320387  258591 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1201 20:01:51.324851  258591 kubeadm.go:884] updating cluster {Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:01:51.324982  258591 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:01:51.325032  258591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:51.358493  258591 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:51.358516  258591 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:01:51.358583  258591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:01:51.384824  258591 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:01:51.384845  258591 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:01:51.384851  258591 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1201 20:01:51.384941  258591 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-138480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:01:51.384997  258591 ssh_runner.go:195] Run: crio config
	I1201 20:01:51.429203  258591 cni.go:84] Creating CNI manager for ""
	I1201 20:01:51.429219  258591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:01:51.429232  258591 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:01:51.429252  258591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-138480 NodeName:pause-138480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:01:51.429391  258591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-138480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:01:51.429464  258591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:01:51.437537  258591 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:01:51.437592  258591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:01:51.445591  258591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1201 20:01:51.458956  258591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:01:51.471771  258591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1201 20:01:51.484146  258591 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:01:51.488186  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:51.598857  258591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:51.612324  258591 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480 for IP: 192.168.94.2
	I1201 20:01:51.612347  258591 certs.go:195] generating shared ca certs ...
	I1201 20:01:51.612365  258591 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:51.612533  258591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:01:51.612595  258591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:01:51.612614  258591 certs.go:257] generating profile certs ...
	I1201 20:01:51.612719  258591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key
	I1201 20:01:51.612800  258591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.key.2b63a7b4
	I1201 20:01:51.612854  258591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.key
	I1201 20:01:51.612988  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:01:51.613033  258591 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:01:51.613047  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:01:51.613098  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:01:51.613134  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:01:51.613170  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:01:51.613232  258591 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:01:51.613874  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:01:51.632936  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:01:51.651712  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:01:51.669081  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:01:51.687196  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:01:51.707112  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:01:51.725579  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:01:51.743627  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:01:51.760984  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:01:51.779684  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:01:51.797590  258591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:01:51.815168  258591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:01:51.827923  258591 ssh_runner.go:195] Run: openssl version
	I1201 20:01:51.833892  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:01:51.843757  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.847970  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.848030  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:01:51.882427  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:01:51.891251  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:01:51.899999  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.903544  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.903588  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:01:51.937939  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:01:51.946465  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:01:51.955083  258591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.958873  258591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.958928  258591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:01:51.996390  258591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:01:52.005004  258591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:01:52.009202  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:01:52.044406  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:01:52.080009  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:01:52.116030  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:01:52.150503  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:01:52.186262  258591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:01:52.220810  258591 kubeadm.go:401] StartCluster: {Name:pause-138480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-138480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:01:52.220931  258591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:01:52.220987  258591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:01:52.248505  258591 cri.go:89] found id: "147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a"
	I1201 20:01:52.248525  258591 cri.go:89] found id: "ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91"
	I1201 20:01:52.248528  258591 cri.go:89] found id: "b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412"
	I1201 20:01:52.248535  258591 cri.go:89] found id: "374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78"
	I1201 20:01:52.248538  258591 cri.go:89] found id: "0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c"
	I1201 20:01:52.248541  258591 cri.go:89] found id: "2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327"
	I1201 20:01:52.248543  258591 cri.go:89] found id: "593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c"
	I1201 20:01:52.248546  258591 cri.go:89] found id: ""
	I1201 20:01:52.248580  258591 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:01:52.259987  258591 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:01:52Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:01:52.260060  258591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:01:52.268404  258591 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:01:52.268420  258591 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:01:52.268456  258591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:01:52.275927  258591 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:01:52.276710  258591 kubeconfig.go:125] found "pause-138480" server: "https://192.168.94.2:8443"
	I1201 20:01:52.277744  258591 kapi.go:59] client config for pause-138480: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:52.278153  258591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 20:01:52.278170  258591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 20:01:52.278175  258591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 20:01:52.278182  258591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 20:01:52.278192  258591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 20:01:52.278507  258591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:01:52.286439  258591 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1201 20:01:52.286474  258591 kubeadm.go:602] duration metric: took 18.04795ms to restartPrimaryControlPlane
	I1201 20:01:52.286493  258591 kubeadm.go:403] duration metric: took 65.694022ms to StartCluster
	I1201 20:01:52.286510  258591 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:52.286578  258591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:01:52.287702  258591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:01:52.287944  258591 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:01:52.288006  258591 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:01:52.288185  258591 config.go:182] Loaded profile config "pause-138480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:52.290447  258591 out.go:179] * Verifying Kubernetes components...
	I1201 20:01:52.290451  258591 out.go:179] * Enabled addons: 
	I1201 20:01:52.291607  258591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:01:52.291614  258591 addons.go:530] duration metric: took 3.615223ms for enable addons: enabled=[]
	I1201 20:01:52.400926  258591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:01:52.414087  258591 node_ready.go:35] waiting up to 6m0s for node "pause-138480" to be "Ready" ...
	I1201 20:01:52.421569  258591 node_ready.go:49] node "pause-138480" is "Ready"
	I1201 20:01:52.421588  258591 node_ready.go:38] duration metric: took 7.453291ms for node "pause-138480" to be "Ready" ...
	I1201 20:01:52.421598  258591 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:01:52.421636  258591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:01:52.432636  258591 api_server.go:72] duration metric: took 144.662421ms to wait for apiserver process to appear ...
	I1201 20:01:52.432655  258591 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:01:52.432668  258591 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1201 20:01:52.436553  258591 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1201 20:01:52.437457  258591 api_server.go:141] control plane version: v1.34.2
	I1201 20:01:52.437476  258591 api_server.go:131] duration metric: took 4.816226ms to wait for apiserver health ...
	I1201 20:01:52.437484  258591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:01:52.440783  258591 system_pods.go:59] 7 kube-system pods found
	I1201 20:01:52.440806  258591 system_pods.go:61] "coredns-66bc5c9577-jshbx" [068fa400-c778-4f67-bfd3-486ad54de1b0] Running
	I1201 20:01:52.440811  258591 system_pods.go:61] "etcd-pause-138480" [a1f5bd7d-aae7-4872-ab98-9320a44d42c5] Running
	I1201 20:01:52.440815  258591 system_pods.go:61] "kindnet-vp7xw" [11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1] Running
	I1201 20:01:52.440818  258591 system_pods.go:61] "kube-apiserver-pause-138480" [102e70a9-fc2d-4d05-b477-a0acef9150be] Running
	I1201 20:01:52.440821  258591 system_pods.go:61] "kube-controller-manager-pause-138480" [a69a7a4f-bc07-49df-9e38-844936cb2885] Running
	I1201 20:01:52.440824  258591 system_pods.go:61] "kube-proxy-fsrnk" [8275c679-6ab0-4cf4-8499-44804b6c5e5f] Running
	I1201 20:01:52.440828  258591 system_pods.go:61] "kube-scheduler-pause-138480" [a7d1d4e0-5612-424c-be48-3f9af521b448] Running
	I1201 20:01:52.440833  258591 system_pods.go:74] duration metric: took 3.34381ms to wait for pod list to return data ...
	I1201 20:01:52.440839  258591 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:01:52.442876  258591 default_sa.go:45] found service account: "default"
	I1201 20:01:52.442898  258591 default_sa.go:55] duration metric: took 2.053979ms for default service account to be created ...
	I1201 20:01:52.442908  258591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:01:52.445509  258591 system_pods.go:86] 7 kube-system pods found
	I1201 20:01:52.445530  258591 system_pods.go:89] "coredns-66bc5c9577-jshbx" [068fa400-c778-4f67-bfd3-486ad54de1b0] Running
	I1201 20:01:52.445535  258591 system_pods.go:89] "etcd-pause-138480" [a1f5bd7d-aae7-4872-ab98-9320a44d42c5] Running
	I1201 20:01:52.445542  258591 system_pods.go:89] "kindnet-vp7xw" [11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1] Running
	I1201 20:01:52.445546  258591 system_pods.go:89] "kube-apiserver-pause-138480" [102e70a9-fc2d-4d05-b477-a0acef9150be] Running
	I1201 20:01:52.445549  258591 system_pods.go:89] "kube-controller-manager-pause-138480" [a69a7a4f-bc07-49df-9e38-844936cb2885] Running
	I1201 20:01:52.445552  258591 system_pods.go:89] "kube-proxy-fsrnk" [8275c679-6ab0-4cf4-8499-44804b6c5e5f] Running
	I1201 20:01:52.445555  258591 system_pods.go:89] "kube-scheduler-pause-138480" [a7d1d4e0-5612-424c-be48-3f9af521b448] Running
	I1201 20:01:52.445562  258591 system_pods.go:126] duration metric: took 2.648365ms to wait for k8s-apps to be running ...
	I1201 20:01:52.445572  258591 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:01:52.445612  258591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:01:52.458885  258591 system_svc.go:56] duration metric: took 13.30547ms WaitForService to wait for kubelet
	I1201 20:01:52.458909  258591 kubeadm.go:587] duration metric: took 170.937748ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:01:52.458928  258591 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:01:52.461796  258591 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:01:52.461819  258591 node_conditions.go:123] node cpu capacity is 8
	I1201 20:01:52.461830  258591 node_conditions.go:105] duration metric: took 2.895264ms to run NodePressure ...
	I1201 20:01:52.461840  258591 start.go:242] waiting for startup goroutines ...
	I1201 20:01:52.461846  258591 start.go:247] waiting for cluster config update ...
	I1201 20:01:52.461854  258591 start.go:256] writing updated cluster config ...
	I1201 20:01:52.462134  258591 ssh_runner.go:195] Run: rm -f paused
	I1201 20:01:52.465884  258591 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:01:52.466593  258591 kapi.go:59] client config for pause-138480: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/profiles/pause-138480/client.key", CAFile:"/home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:01:52.469170  258591 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jshbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.473634  258591 pod_ready.go:94] pod "coredns-66bc5c9577-jshbx" is "Ready"
	I1201 20:01:52.473656  258591 pod_ready.go:86] duration metric: took 4.46582ms for pod "coredns-66bc5c9577-jshbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.475557  258591 pod_ready.go:83] waiting for pod "etcd-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.478961  258591 pod_ready.go:94] pod "etcd-pause-138480" is "Ready"
	I1201 20:01:52.478978  258591 pod_ready.go:86] duration metric: took 3.405491ms for pod "etcd-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.480712  258591 pod_ready.go:83] waiting for pod "kube-apiserver-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.484155  258591 pod_ready.go:94] pod "kube-apiserver-pause-138480" is "Ready"
	I1201 20:01:52.484174  258591 pod_ready.go:86] duration metric: took 3.444736ms for pod "kube-apiserver-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.485924  258591 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:52.869969  258591 pod_ready.go:94] pod "kube-controller-manager-pause-138480" is "Ready"
	I1201 20:01:52.869994  258591 pod_ready.go:86] duration metric: took 384.047726ms for pod "kube-controller-manager-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.070141  258591 pod_ready.go:83] waiting for pod "kube-proxy-fsrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.470250  258591 pod_ready.go:94] pod "kube-proxy-fsrnk" is "Ready"
	I1201 20:01:53.470277  258591 pod_ready.go:86] duration metric: took 400.11172ms for pod "kube-proxy-fsrnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:53.670301  258591 pod_ready.go:83] waiting for pod "kube-scheduler-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:54.070387  258591 pod_ready.go:94] pod "kube-scheduler-pause-138480" is "Ready"
	I1201 20:01:54.070414  258591 pod_ready.go:86] duration metric: took 400.076561ms for pod "kube-scheduler-pause-138480" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:01:54.070429  258591 pod_ready.go:40] duration metric: took 1.60451867s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:01:54.112985  258591 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:01:54.115508  258591 out.go:179] * Done! kubectl is now configured to use "pause-138480" cluster and "default" namespace by default
	I1201 20:01:49.697448  256706 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.127647121Z" level=info msg="RDT not available in the host system"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.127658283Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128502122Z" level=info msg="Conmon does support the --sync option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128518668Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.128531625Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.12923598Z" level=info msg="Conmon does support the --sync option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.129248892Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133028036Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133058244Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.133647706Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.134031607Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.134090029Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.207569266Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-jshbx Namespace:kube-system ID:ef391e39fc796a54701f0831c2ef2ca23471f4892dc771bf09f273ea2ac68d12 UID:068fa400-c778-4f67-bfd3-486ad54de1b0 NetNS:/var/run/netns/cc31ca13-4e82-4dd1-9c8c-711fe40fcdf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132ac0}] Aliases:map[]}"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.207798154Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-jshbx for CNI network kindnet (type=ptp)"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.20824342Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208270419Z" level=info msg="Starting seccomp notifier watcher"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208342058Z" level=info msg="Create NRI interface"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208474438Z" level=info msg="built-in NRI default validator is disabled"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.20848913Z" level=info msg="runtime interface created"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208502509Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208511453Z" level=info msg="runtime interface starting up..."
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208519282Z" level=info msg="starting plugins..."
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208534444Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 01 20:01:51 pause-138480 crio[2138]: time="2025-12-01T20:01:51.208925316Z" level=info msg="No systemd watchdog enabled"
	Dec 01 20:01:51 pause-138480 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	147ea3d08b8b4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   ef391e39fc796       coredns-66bc5c9577-jshbx               kube-system
	ad04cccc2b816       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   24 seconds ago      Running             kube-proxy                0                   25c11b9a6413d       kube-proxy-fsrnk                       kube-system
	b4dadfb6ece00       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   3520acc84aaf5       kindnet-vp7xw                          kube-system
	374d5f1391210       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   5ddcfc14aa78a       kube-scheduler-pause-138480            kube-system
	0158754becba5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   a6a78829531d9       kube-controller-manager-pause-138480   kube-system
	2b9385f7318ca       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   155fa5407ea25       etcd-pause-138480                      kube-system
	593a14807bb0c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   59736cb10512c       kube-apiserver-pause-138480            kube-system
	
	
	==> coredns [147ea3d08b8b4e2b0312515352191f9268f8a7433d6a7d8bb19746f78b54f14a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43674 - 15459 "HINFO IN 34557710388052300.8662693174993530794. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.030180416s
	
	
	==> describe nodes <==
	Name:               pause-138480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-138480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=pause-138480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_01_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:01:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-138480
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:01:49 +0000   Mon, 01 Dec 2025 20:01:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-138480
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                efb0169e-3bf7-4721-9a62-4283b1d5ce1e
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jshbx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-138480                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-vp7xw                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-138480             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-138480    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-fsrnk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-138480             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-138480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-138480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-138480 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-138480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-138480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-138480 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-138480 event: Registered Node pause-138480 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-138480 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091158] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023654] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.003803] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.060605] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023816] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023890] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +1.023874] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +2.047751] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +4.031647] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[  +8.063094] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[Dec 1 19:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	[ +32.252518] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 86 74 63 ae 6d 8f ae bc e5 a7 b5 a4 08 00
	
	
	==> etcd [2b9385f7318cad2b2fde74e9cb42911f55fd46226bd147e49c2db9e0a2670327] <==
	{"level":"warn","ts":"2025-12-01T20:01:25.111242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.119531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.128021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.139514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.148093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.157144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.168109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.181418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.189976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.198966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.207271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.233902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.241707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.251030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.260009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.268473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.277760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.284443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.302871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.312077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.329722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.334555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.345225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.355606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:01:25.418275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40198","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:01:59 up  1:44,  0 user,  load average: 5.00, 3.10, 1.89
	Linux pause-138480 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4dadfb6ece008e12b30cb292e823857812381b746509a5ce67d54a76e5a7412] <==
	I1201 20:01:34.449923       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:01:34.450165       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1201 20:01:34.543718       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:01:34.543754       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:01:34.543771       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:01:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:01:34.654227       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:01:34.654343       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:01:34.654373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:01:34.654531       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:01:35.044645       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:01:35.044684       1 metrics.go:72] Registering metrics
	I1201 20:01:35.044736       1 controller.go:711] "Syncing nftables rules"
	I1201 20:01:44.653785       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:01:44.653833       1 main.go:301] handling current node
	I1201 20:01:54.655770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:01:54.655881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [593a14807bb0c68571bd6d5ece24497643fb13fa1416316ff415bae42a99b78c] <==
	I1201 20:01:25.985803       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1201 20:01:25.985835       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 20:01:25.987390       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:25.987427       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1201 20:01:25.991866       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1201 20:01:25.992416       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:25.994069       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:01:26.195592       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:01:26.886658       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1201 20:01:26.890250       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:01:26.890270       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:01:27.353462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:01:27.387615       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:01:27.492753       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:01:27.498941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1201 20:01:27.499992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:01:27.504271       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:01:27.937391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:01:28.602550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:01:28.612326       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:01:28.619967       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:01:33.791352       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:01:33.842997       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1201 20:01:33.891873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:01:33.895713       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0158754becba52c503e34cf4d84eb782e05f945cfd408718af2628825910395c] <==
	I1201 20:01:32.985843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:01:32.985860       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:01:32.985869       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:01:32.986094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 20:01:32.987143       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:01:32.987162       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:01:32.987189       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1201 20:01:32.987211       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:01:32.987239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 20:01:32.987313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 20:01:32.987389       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:01:32.987410       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 20:01:32.987660       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:01:32.987717       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:01:32.987756       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:01:32.988876       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:01:32.988921       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1201 20:01:32.988982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 20:01:32.989043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-138480"
	I1201 20:01:32.989125       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 20:01:32.992710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:01:32.995141       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:01:32.997876       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-138480" podCIDRs=["10.244.0.0/24"]
	I1201 20:01:33.006793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:01:48.133236       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad04cccc2b8167eae9f5df2b23c89f5badefb46914c63c3aa5b16de977a75f91] <==
	I1201 20:01:34.317955       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:01:34.387867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:01:34.488551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:01:34.488605       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1201 20:01:34.488709       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:01:34.511050       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:01:34.511122       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:01:34.516660       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:01:34.517026       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:01:34.517062       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:01:34.519032       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:01:34.519058       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:01:34.519076       1 config.go:309] "Starting node config controller"
	I1201 20:01:34.519096       1 config.go:200] "Starting service config controller"
	I1201 20:01:34.519103       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:01:34.519109       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:01:34.519101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:01:34.519109       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:01:34.519120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:01:34.619944       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:01:34.619968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:01:34.619990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [374d5f1391210ce8d3c2aa9df74962c74263e93470f5829282d7d0fe1a2abf78] <==
	E1201 20:01:25.949005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:01:25.949087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:01:25.949147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:01:25.949188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:01:25.949605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:01:25.949352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:01:25.949685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:01:25.949737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:01:25.949801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:01:25.949845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:01:25.949888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:01:25.949987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:01:25.949893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:01:25.950558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:01:26.847959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:01:26.914104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:01:26.923238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:01:27.006056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:01:27.017256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:01:27.070856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:01:27.070856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:01:27.155634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:01:27.188817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:01:27.384789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1201 20:01:29.844837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:01:29 pause-138480 kubelet[1300]: E1201 20:01:29.473822    1300 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-138480\" already exists" pod="kube-system/kube-apiserver-pause-138480"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.521509    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-138480" podStartSLOduration=1.521484188 podStartE2EDuration="1.521484188s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.510158502 +0000 UTC m=+1.158059926" watchObservedRunningTime="2025-12-01 20:01:29.521484188 +0000 UTC m=+1.169385602"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.535052    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-138480" podStartSLOduration=1.535027398 podStartE2EDuration="1.535027398s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.521689085 +0000 UTC m=+1.169590501" watchObservedRunningTime="2025-12-01 20:01:29.535027398 +0000 UTC m=+1.182928822"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.548204    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-138480" podStartSLOduration=2.548184378 podStartE2EDuration="2.548184378s" podCreationTimestamp="2025-12-01 20:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.535392306 +0000 UTC m=+1.183293729" watchObservedRunningTime="2025-12-01 20:01:29.548184378 +0000 UTC m=+1.196085800"
	Dec 01 20:01:29 pause-138480 kubelet[1300]: I1201 20:01:29.564995    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-138480" podStartSLOduration=1.564971833 podStartE2EDuration="1.564971833s" podCreationTimestamp="2025-12-01 20:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:29.549082665 +0000 UTC m=+1.196984072" watchObservedRunningTime="2025-12-01 20:01:29.564971833 +0000 UTC m=+1.212873257"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.084607    1300 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.085638    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969085    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-xtables-lock\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969139    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8275c679-6ab0-4cf4-8499-44804b6c5e5f-lib-modules\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969165    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khmp\" (UniqueName: \"kubernetes.io/projected/8275c679-6ab0-4cf4-8499-44804b6c5e5f-kube-api-access-8khmp\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969196    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-lib-modules\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969228    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-cni-cfg\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969248    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbp7\" (UniqueName: \"kubernetes.io/projected/11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1-kube-api-access-9pbp7\") pod \"kindnet-vp7xw\" (UID: \"11a50e41-ad3e-42f3-ba8d-9cb87adb5ae1\") " pod="kube-system/kindnet-vp7xw"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969268    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8275c679-6ab0-4cf4-8499-44804b6c5e5f-kube-proxy\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:33 pause-138480 kubelet[1300]: I1201 20:01:33.969303    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8275c679-6ab0-4cf4-8499-44804b6c5e5f-xtables-lock\") pod \"kube-proxy-fsrnk\" (UID: \"8275c679-6ab0-4cf4-8499-44804b6c5e5f\") " pod="kube-system/kube-proxy-fsrnk"
	Dec 01 20:01:34 pause-138480 kubelet[1300]: I1201 20:01:34.481210    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vp7xw" podStartSLOduration=1.4811878809999999 podStartE2EDuration="1.481187881s" podCreationTimestamp="2025-12-01 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:34.481055581 +0000 UTC m=+6.128957041" watchObservedRunningTime="2025-12-01 20:01:34.481187881 +0000 UTC m=+6.129089303"
	Dec 01 20:01:34 pause-138480 kubelet[1300]: I1201 20:01:34.490924    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fsrnk" podStartSLOduration=1.490905152 podStartE2EDuration="1.490905152s" podCreationTimestamp="2025-12-01 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:34.490827657 +0000 UTC m=+6.138729077" watchObservedRunningTime="2025-12-01 20:01:34.490905152 +0000 UTC m=+6.138806574"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.169730    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.248950    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/068fa400-c778-4f67-bfd3-486ad54de1b0-config-volume\") pod \"coredns-66bc5c9577-jshbx\" (UID: \"068fa400-c778-4f67-bfd3-486ad54de1b0\") " pod="kube-system/coredns-66bc5c9577-jshbx"
	Dec 01 20:01:45 pause-138480 kubelet[1300]: I1201 20:01:45.249008    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgksc\" (UniqueName: \"kubernetes.io/projected/068fa400-c778-4f67-bfd3-486ad54de1b0-kube-api-access-pgksc\") pod \"coredns-66bc5c9577-jshbx\" (UID: \"068fa400-c778-4f67-bfd3-486ad54de1b0\") " pod="kube-system/coredns-66bc5c9577-jshbx"
	Dec 01 20:01:46 pause-138480 kubelet[1300]: I1201 20:01:46.523974    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jshbx" podStartSLOduration=12.523946523 podStartE2EDuration="12.523946523s" podCreationTimestamp="2025-12-01 20:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:01:46.511582548 +0000 UTC m=+18.159483972" watchObservedRunningTime="2025-12-01 20:01:46.523946523 +0000 UTC m=+18.171847947"
	Dec 01 20:01:54 pause-138480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:01:54 pause-138480 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:01:54 pause-138480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:01:54 pause-138480 systemd[1]: kubelet.service: Consumed 1.237s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-138480 -n pause-138480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-138480 -n pause-138480: exit status 2 (321.220467ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-138480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-217464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-217464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.625837ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:07:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-217464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-217464 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-217464 describe deploy/metrics-server -n kube-system: exit status 1 (61.198154ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-217464 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
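Both failures in this block follow from the exit status 11 above: the addon enable aborts with MK_ADDON_ENABLE_PAUSED because minikube's paused-state check runs sudo runc list -f json inside the node and that command fails with "open /run/runc: no such file or directory", and since the addon is never applied, the follow-up describe finds no metrics-server deployment. The same two checks can be rerun by hand against this profile; a sketch (the jsonpath query is only an illustration, not what the test runs):

	out/minikube-linux-amd64 ssh -p old-k8s-version-217464 sudo runc list -f json
	kubectl --context old-k8s-version-217464 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Once the addon does get applied, the second command prints the image the deployment references, which is the value the assertion above expects to contain fake.domain/registry.k8s.io/echoserver:1.4.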
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-217464
helpers_test.go:243: (dbg) docker inspect old-k8s-version-217464:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	        "Created": "2025-12-01T20:06:33.460541938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:06:33.502350629Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hosts",
	        "LogPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9-json.log",
	        "Name": "/old-k8s-version-217464",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-217464:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-217464",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	                "LowerDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-217464",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-217464/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-217464",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "765f9974a2fd4d1f8b455be27dd5395b8e05eace5e2c6dbd1a3b919108f7b4da",
	            "SandboxKey": "/var/run/docker/netns/765f9974a2fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-217464": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1d25a2ef13e6f84d7dee9dd1a8ffb7c5ebd5713411470cffa733d6c3a1a597a",
	                    "EndpointID": "fd5118b393582059b1eeac322a88ff3c1e3a7e15c0f3a420f4bd038867decccb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "02:d2:d3:a8:e8:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-217464",
	                        "e59219b4cc96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
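The inspect dump above mostly matters for two facts: the node container is running, and the API server port 8443/tcp is published on 127.0.0.1:33096. When only those fields are needed, docker's --format templating can pull them out without the full JSON; a small sketch using the field paths visible in the dump:

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-217464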
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25: (1.224568828s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-551864 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                 │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /etc/kubernetes/kubelet.conf                                                                                                │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /var/lib/kubelet/config.yaml                                                                                                │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo systemctl status docker --all --full --no-pager                                                                                 │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo docker system info                                                                                                              │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 pgrep -a kubelet                                                                                                                      │ bridge-551864          │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cri-dockerd --version                                                                                                           │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p flannel-551864 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo containerd config dump                                                                                                          │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p flannel-551864 sudo crio config                                                                                                                     │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p flannel-551864                                                                                                                                      │ flannel-551864         │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ embed-certs-990820     │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-217464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-217464 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:07:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:07:21.493359  335220 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:07:21.493677  335220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:21.493689  335220 out.go:374] Setting ErrFile to fd 2...
	I1201 20:07:21.493696  335220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:21.493885  335220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:07:21.494363  335220 out.go:368] Setting JSON to false
	I1201 20:07:21.495578  335220 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6592,"bootTime":1764613049,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:07:21.495654  335220 start.go:143] virtualization: kvm guest
	I1201 20:07:21.497775  335220 out.go:179] * [embed-certs-990820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:07:21.499323  335220 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:07:21.499323  335220 notify.go:221] Checking for updates...
	I1201 20:07:21.501086  335220 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:07:21.502405  335220 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:21.503598  335220 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:07:21.507764  335220 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:07:21.508916  335220 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:07:21.510712  335220 config.go:182] Loaded profile config "bridge-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:07:21.510876  335220 config.go:182] Loaded profile config "no-preload-240359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:07:21.510989  335220 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:21.511129  335220 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:07:21.537439  335220 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:07:21.537510  335220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:21.599380  335220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-01 20:07:21.589133245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:21.599515  335220 docker.go:319] overlay module found
	I1201 20:07:21.601513  335220 out.go:179] * Using the docker driver based on user configuration
	I1201 20:07:21.602788  335220 start.go:309] selected driver: docker
	I1201 20:07:21.602803  335220 start.go:927] validating driver "docker" against <nil>
	I1201 20:07:21.602813  335220 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:07:21.603396  335220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:21.661961  335220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-01 20:07:21.652154201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:21.662176  335220 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:07:21.662450  335220 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:07:21.664098  335220 out.go:179] * Using Docker driver with root privileges
	I1201 20:07:21.665261  335220 cni.go:84] Creating CNI manager for ""
	I1201 20:07:21.665346  335220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:21.665358  335220 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 20:07:21.665437  335220 start.go:353] cluster config:
	{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:21.666695  335220 out.go:179] * Starting "embed-certs-990820" primary control-plane node in "embed-certs-990820" cluster
	I1201 20:07:21.667805  335220 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:07:21.668990  335220 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:07:21.670155  335220 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:07:21.670186  335220 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:07:21.670208  335220 cache.go:65] Caching tarball of preloaded images
	I1201 20:07:21.670252  335220 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:07:21.670317  335220 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:07:21.670339  335220 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:07:21.670447  335220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json ...
	I1201 20:07:21.670471  335220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json: {Name:mk2e874f3018bab0298c4f09043470e40177c795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.692771  335220 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:07:21.692794  335220 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:07:21.692816  335220 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:07:21.692851  335220 start.go:360] acquireMachinesLock for embed-certs-990820: {Name:mk0308557d4346623fb3193dcae3b8f2c186483d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:07:21.692957  335220 start.go:364] duration metric: took 89.289µs to acquireMachinesLock for "embed-certs-990820"
	I1201 20:07:21.692984  335220 start.go:93] Provisioning new machine with config: &{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:07:21.693079  335220 start.go:125] createHost starting for "" (driver="docker")
	I1201 20:07:20.682208  327969 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.651747608s)
	I1201 20:07:20.682245  327969 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 20:07:20.682274  327969 cache_images.go:125] Successfully loaded all cached images
	I1201 20:07:20.682279  327969 cache_images.go:94] duration metric: took 11.100572653s to LoadCachedImages
	I1201 20:07:20.682359  327969 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:07:20.682477  327969 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-240359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:07:20.682567  327969 ssh_runner.go:195] Run: crio config
	I1201 20:07:20.741976  327969 cni.go:84] Creating CNI manager for ""
	I1201 20:07:20.742004  327969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:20.742024  327969 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:07:20.742053  327969 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-240359 NodeName:no-preload-240359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:07:20.742229  327969 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-240359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:07:20.742342  327969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:07:20.752016  327969 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 20:07:20.752100  327969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:07:20.761078  327969 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1201 20:07:20.761113  327969 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1201 20:07:20.761113  327969 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1201 20:07:20.761176  327969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 20:07:20.761202  327969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 20:07:20.761179  327969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:07:20.767381  327969 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 20:07:20.767417  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1201 20:07:20.779521  327969 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 20:07:20.779555  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1201 20:07:20.779534  327969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 20:07:20.799032  327969 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 20:07:20.799069  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1201 20:07:21.265022  327969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:07:21.274222  327969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:07:21.304860  327969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:07:21.327313  327969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1201 20:07:21.341173  327969 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:07:21.345476  327969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:21.356021  327969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:21.444065  327969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:21.473861  327969 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359 for IP: 192.168.85.2
	I1201 20:07:21.473884  327969 certs.go:195] generating shared ca certs ...
	I1201 20:07:21.473904  327969 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.474061  327969 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:07:21.474121  327969 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:07:21.474135  327969 certs.go:257] generating profile certs ...
	I1201 20:07:21.474207  327969 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key
	I1201 20:07:21.474226  327969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.crt with IP's: []
	I1201 20:07:21.549660  327969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.crt ...
	I1201 20:07:21.549721  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.crt: {Name:mka0300cbe5c3539f67a4469d91591c53536927f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.549903  327969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key ...
	I1201 20:07:21.549917  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key: {Name:mk0d3143ce508665b443a6ea5861af487b7db555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.550019  327969 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c
	I1201 20:07:21.550041  327969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt.e236d75c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1201 20:07:21.712369  327969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt.e236d75c ...
	I1201 20:07:21.712391  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt.e236d75c: {Name:mkc7bbdf2b6efff5f8720c84f7c0956720fd337f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.712575  327969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c ...
	I1201 20:07:21.712592  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c: {Name:mk82585fae98ceaafd71482315a8c691c0f335d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.712695  327969 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt.e236d75c -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt
	I1201 20:07:21.712797  327969 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key
	I1201 20:07:21.712889  327969 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key
	I1201 20:07:21.712912  327969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt with IP's: []
	I1201 20:07:21.753467  327969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt ...
	I1201 20:07:21.753489  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt: {Name:mked3ae4fa6f5dda79ca65f57e9bfa635e1977c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.753639  327969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key ...
	I1201 20:07:21.753655  327969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key: {Name:mk65cda959cf6acecba4d157d90af69be457c0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:21.753885  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:07:21.753938  327969 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:07:21.753954  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:07:21.753995  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:07:21.754037  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:07:21.754084  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:07:21.754141  327969 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:21.754810  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:07:21.776838  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:07:21.798563  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:07:21.819594  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:07:21.839876  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:07:21.859426  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:07:21.879410  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:07:21.898749  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:07:21.916826  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:07:21.960577  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:07:21.979148  327969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:07:21.997109  327969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:07:22.010995  327969 ssh_runner.go:195] Run: openssl version
	I1201 20:07:22.018231  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:07:22.088458  327969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:07:22.093614  327969 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:07:22.093666  327969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:07:22.130960  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:07:22.141028  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:07:22.150625  327969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:22.155443  327969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:22.155509  327969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:22.192032  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:07:22.202716  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:07:22.212264  327969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:07:22.216783  327969 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:07:22.216838  327969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:07:22.252214  327969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:07:22.262011  327969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:07:22.266524  327969 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:07:22.266579  327969 kubeadm.go:401] StartCluster: {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:22.266664  327969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:07:22.266699  327969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:07:22.294267  327969 cri.go:89] found id: ""
	I1201 20:07:22.294355  327969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:07:22.303548  327969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:07:22.313223  327969 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:07:22.313282  327969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:07:22.322987  327969 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:07:22.323009  327969 kubeadm.go:158] found existing configuration files:
	
	I1201 20:07:22.323056  327969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:07:22.332692  327969 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:07:22.332738  327969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:07:22.341397  327969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:07:22.350319  327969 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:07:22.350376  327969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:07:22.358336  327969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:07:22.366543  327969 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:07:22.366607  327969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:07:22.374814  327969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:07:22.383160  327969 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:07:22.383218  327969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:07:22.391388  327969 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:07:22.432599  327969 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 20:07:22.432667  327969 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:07:22.509981  327969 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:07:22.510078  327969 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:07:22.510137  327969 kubeadm.go:319] OS: Linux
	I1201 20:07:22.510198  327969 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:07:22.510254  327969 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:07:22.510333  327969 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:07:22.510398  327969 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:07:22.510460  327969 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:07:22.510524  327969 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:07:22.510587  327969 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:07:22.510648  327969 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:07:22.581090  327969 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:07:22.581244  327969 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:07:22.581416  327969 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:07:22.596602  327969 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:07:22.598804  327969 out.go:252]   - Generating certificates and keys ...
	I1201 20:07:22.598928  327969 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:07:22.599031  327969 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:07:22.694448  327969 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:07:22.822548  327969 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:07:22.852415  327969 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:07:23.010378  327969 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:07:23.103801  327969 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:07:23.104003  327969 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-240359] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1201 20:07:23.241849  327969 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:07:23.242147  327969 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-240359] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1201 20:07:23.319373  327969 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:07:23.495431  327969 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:07:23.625802  327969 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:07:23.626001  327969 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:07:23.669434  327969 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:07:23.790768  327969 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:07:23.851231  327969 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:07:23.935092  327969 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:07:23.977218  327969 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:07:23.977880  327969 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:07:23.983637  327969 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:07:23.986393  327969 out.go:252]   - Booting up control plane ...
	I1201 20:07:23.986532  327969 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:07:23.986652  327969 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:07:23.986753  327969 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:07:24.000126  327969 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:07:24.000277  327969 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:07:24.008022  327969 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:07:24.008265  327969 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:07:24.008374  327969 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:07:24.123377  327969 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:07:24.123578  327969 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:07:24.625017  327969 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.848539ms
	I1201 20:07:24.628017  327969 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:07:24.628154  327969 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1201 20:07:24.628283  327969 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:07:24.628419  327969 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:07:21.694971  335220 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1201 20:07:21.695206  335220 start.go:159] libmachine.API.Create for "embed-certs-990820" (driver="docker")
	I1201 20:07:21.695238  335220 client.go:173] LocalClient.Create starting
	I1201 20:07:21.695309  335220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem
	I1201 20:07:21.695340  335220 main.go:143] libmachine: Decoding PEM data...
	I1201 20:07:21.695360  335220 main.go:143] libmachine: Parsing certificate...
	I1201 20:07:21.695408  335220 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem
	I1201 20:07:21.695430  335220 main.go:143] libmachine: Decoding PEM data...
	I1201 20:07:21.695442  335220 main.go:143] libmachine: Parsing certificate...
	I1201 20:07:21.695734  335220 cli_runner.go:164] Run: docker network inspect embed-certs-990820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1201 20:07:21.713602  335220 cli_runner.go:211] docker network inspect embed-certs-990820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1201 20:07:21.713676  335220 network_create.go:284] running [docker network inspect embed-certs-990820] to gather additional debugging logs...
	I1201 20:07:21.713696  335220 cli_runner.go:164] Run: docker network inspect embed-certs-990820
	W1201 20:07:21.730927  335220 cli_runner.go:211] docker network inspect embed-certs-990820 returned with exit code 1
	I1201 20:07:21.730966  335220 network_create.go:287] error running [docker network inspect embed-certs-990820]: docker network inspect embed-certs-990820: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-990820 not found
	I1201 20:07:21.730994  335220 network_create.go:289] output of [docker network inspect embed-certs-990820]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-990820 not found
	
	** /stderr **
	I1201 20:07:21.731121  335220 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:07:21.749390  335220 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-76afd0f6296c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:9d:28:e3:43:67} reservation:<nil>}
	I1201 20:07:21.750110  335220 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c4bf70fcc880 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8e:52:fe:f5:21:25} reservation:<nil>}
	I1201 20:07:21.750897  335220 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c1c516f0fb5f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:66:fb:c8:c0:29} reservation:<nil>}
	I1201 20:07:21.751573  335220 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1d25a2ef13e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:30:7f:c5:43:af} reservation:<nil>}
	I1201 20:07:21.752244  335220 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9442b61c8947 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:a3:7c:b6:3b:49} reservation:<nil>}
	I1201 20:07:21.753124  335220 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f24290}
	I1201 20:07:21.753158  335220 network_create.go:124] attempt to create docker network embed-certs-990820 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1201 20:07:21.753227  335220 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-990820 embed-certs-990820
	I1201 20:07:21.806347  335220 network_create.go:108] docker network embed-certs-990820 192.168.94.0/24 created
	I1201 20:07:21.806377  335220 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-990820" container
	I1201 20:07:21.806428  335220 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1201 20:07:21.825086  335220 cli_runner.go:164] Run: docker volume create embed-certs-990820 --label name.minikube.sigs.k8s.io=embed-certs-990820 --label created_by.minikube.sigs.k8s.io=true
	I1201 20:07:21.844746  335220 oci.go:103] Successfully created a docker volume embed-certs-990820
	I1201 20:07:21.844838  335220 cli_runner.go:164] Run: docker run --rm --name embed-certs-990820-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-990820 --entrypoint /usr/bin/test -v embed-certs-990820:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1201 20:07:22.778542  335220 oci.go:107] Successfully prepared a docker volume embed-certs-990820
	I1201 20:07:22.778610  335220 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:07:22.778621  335220 kic.go:194] Starting extracting preloaded images to volume ...
	I1201 20:07:22.778679  335220 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-990820:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1201 20:07:25.632682  327969 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004497079s
	I1201 20:07:26.752728  327969 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.124572575s
	I1201 20:07:28.630333  327969 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002163713s
	I1201 20:07:28.654532  327969 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:07:28.665878  327969 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:07:28.681057  327969 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:07:28.681494  327969 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-240359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:07:28.700441  327969 kubeadm.go:319] [bootstrap-token] Using token: uxflgn.sgfh8oh4t7i7hdu5
	
	
	==> CRI-O <==
	Dec 01 20:07:15 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:15.874261739Z" level=info msg="Starting container: 568edf6bd6c5c4d5f979e981a025c7d6061aab8e50994cbe506a50e511a46ad8" id=a492d60a-be8d-42eb-85f5-c52a3f9bfe99 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:07:15 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:15.876732502Z" level=info msg="Started container" PID=2146 containerID=568edf6bd6c5c4d5f979e981a025c7d6061aab8e50994cbe506a50e511a46ad8 description=kube-system/coredns-5dd5756b68-jpv6h/coredns id=a492d60a-be8d-42eb-85f5-c52a3f9bfe99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=464fddeb502445baa0c2ad2496f4ba5fefb576e42aa87c47385a03bb1a929899
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.409136356Z" level=info msg="Running pod sandbox: default/busybox/POD" id=abb98ed3-3030-4a4f-813e-235394723188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.409253106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.414831248Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1c55de84a658cac982bc9d627c038d3c4e290d38b938b73d21a7286915f6c49d UID:37d188bf-79e8-4b6f-bbfd-3889f55ecfbd NetNS:/var/run/netns/1103a6f1-a7e7-48f0-9d0c-1ef3121f1d7c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00076c678}] Aliases:map[]}"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.414863654Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.426142839Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1c55de84a658cac982bc9d627c038d3c4e290d38b938b73d21a7286915f6c49d UID:37d188bf-79e8-4b6f-bbfd-3889f55ecfbd NetNS:/var/run/netns/1103a6f1-a7e7-48f0-9d0c-1ef3121f1d7c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00076c678}] Aliases:map[]}"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.42634531Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.427303106Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.4284995Z" level=info msg="Ran pod sandbox 1c55de84a658cac982bc9d627c038d3c4e290d38b938b73d21a7286915f6c49d with infra container: default/busybox/POD" id=abb98ed3-3030-4a4f-813e-235394723188 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.429775062Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4f0d77e1-fb58-4110-a5b4-269cef94779f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.429898343Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4f0d77e1-fb58-4110-a5b4-269cef94779f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.429946089Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4f0d77e1-fb58-4110-a5b4-269cef94779f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.430548992Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c71cbcd-a48a-49ce-82e3-80a954ecb36f name=/runtime.v1.ImageService/PullImage
	Dec 01 20:07:18 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:18.432067052Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.694824339Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8c71cbcd-a48a-49ce-82e3-80a954ecb36f name=/runtime.v1.ImageService/PullImage
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.695847469Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b03d12a-9325-4238-904e-bcb65afe5a44 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.697607478Z" level=info msg="Creating container: default/busybox/busybox" id=44d80cd0-4e94-472f-992c-c37f84d24123 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.697744549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.702146149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.702621739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.727783338Z" level=info msg="Created container 46fd90657ed1a53f8aea16a9ffa1eeb82b635725097f4a12372b32efd12bd3de: default/busybox/busybox" id=44d80cd0-4e94-472f-992c-c37f84d24123 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.728564874Z" level=info msg="Starting container: 46fd90657ed1a53f8aea16a9ffa1eeb82b635725097f4a12372b32efd12bd3de" id=04982a79-410f-4f99-a6a8-e41a595c969a name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:07:20 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:20.730348922Z" level=info msg="Started container" PID=2221 containerID=46fd90657ed1a53f8aea16a9ffa1eeb82b635725097f4a12372b32efd12bd3de description=default/busybox/busybox id=04982a79-410f-4f99-a6a8-e41a595c969a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c55de84a658cac982bc9d627c038d3c4e290d38b938b73d21a7286915f6c49d
	Dec 01 20:07:28 old-k8s-version-217464 crio[783]: time="2025-12-01T20:07:28.210171179Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	46fd90657ed1a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1c55de84a658c       busybox                                          default
	568edf6bd6c5c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   464fddeb50244       coredns-5dd5756b68-jpv6h                         kube-system
	9edce7ead979d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   9feb989a38ad4       storage-provisioner                              kube-system
	00d138dae40fc       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   0749a02dc5ddf       kindnet-x9tkl                                    kube-system
	a581ccb96166b       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   132c09788fa3a       kube-proxy-fjhhh                                 kube-system
	1a1c0597ef98d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   9af4e7bfdb07b       etcd-old-k8s-version-217464                      kube-system
	c80d5b0d8cb78       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   e16e436c905a3       kube-controller-manager-old-k8s-version-217464   kube-system
	091b6b7fcf925       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   68a931de7e53d       kube-apiserver-old-k8s-version-217464            kube-system
	9d3ec6d855b55       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   3962010d9aa3c       kube-scheduler-old-k8s-version-217464            kube-system
	
	
	==> coredns [568edf6bd6c5c4d5f979e981a025c7d6061aab8e50994cbe506a50e511a46ad8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47851 - 59881 "HINFO IN 6599316132634111952.5605674262990456391. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02259124s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-217464
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-217464
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=old-k8s-version-217464
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_06_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:06:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-217464
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:07:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:07:19 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:07:19 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:07:19 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:07:19 +0000   Mon, 01 Dec 2025 20:07:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-217464
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                ed847e9c-b6d4-4f47-a0ed-41ae4070a3c6
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-jpv6h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-217464                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-x9tkl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-217464             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-217464    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-fjhhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-217464             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  45s (x8 over 46s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 46s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 46s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-217464 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [1a1c0597ef98d45a6bb7ef0241ee5517b6f5079432be4ffff9464e0c66b9ce77] <==
	{"level":"info","ts":"2025-12-01T20:06:44.525116Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:06:44.525156Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:06:44.52527Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-01T20:06:44.525337Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-01T20:06:44.916348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-01T20:06:44.916393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-01T20:06:44.916432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-01T20:06:44.916447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-01T20:06:44.916459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-01T20:06:44.916476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-01T20:06:44.916488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-01T20:06:44.917324Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:06:44.917805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:06:44.917802Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-217464 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-01T20:06:44.917823Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:06:44.91818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-01T20:06:44.918312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-01T20:06:44.917959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:06:44.918884Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:06:44.918959Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:06:44.920184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-01T20:06:44.920269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-01T20:07:19.925911Z","caller":"traceutil/trace.go:171","msg":"trace[1847870480] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"206.530433ms","start":"2025-12-01T20:07:19.719357Z","end":"2025-12-01T20:07:19.925887Z","steps":["trace[1847870480] 'process raft request'  (duration: 206.388636ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T20:07:20.083729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.403462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-12-01T20:07:20.083825Z","caller":"traceutil/trace.go:171","msg":"trace[502024324] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:459; }","duration":"144.541458ms","start":"2025-12-01T20:07:19.939264Z","end":"2025-12-01T20:07:20.083806Z","steps":["trace[502024324] 'range keys from in-memory index tree'  (duration: 144.25183ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:07:29 up  1:50,  0 user,  load average: 4.35, 3.18, 2.20
	Linux old-k8s-version-217464 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [00d138dae40fc4a41b69593f3e9f419aaae61d5d3344506d049f01d3da459481] <==
	I1201 20:07:04.720048       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:07:04.720330       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1201 20:07:04.720469       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:07:04.720484       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:07:04.720508       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:07:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:07:05.018611       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:07:05.018645       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:07:05.018660       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:07:05.019569       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:07:05.318909       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:07:05.318939       1 metrics.go:72] Registering metrics
	I1201 20:07:05.319022       1 controller.go:711] "Syncing nftables rules"
	I1201 20:07:15.024193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:07:15.024272       1 main.go:301] handling current node
	I1201 20:07:25.020283       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:07:25.020355       1 main.go:301] handling current node
	
	
	==> kube-apiserver [091b6b7fcf9254ab4af950c5cb5b5463f3ef87d87d83271245660a91cd1a1702] <==
	I1201 20:06:46.076544       1 shared_informer.go:318] Caches are synced for configmaps
	I1201 20:06:46.076663       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1201 20:06:46.076737       1 aggregator.go:166] initial CRD sync complete...
	I1201 20:06:46.076754       1 autoregister_controller.go:141] Starting autoregister controller
	I1201 20:06:46.076761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:06:46.076769       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:06:46.076780       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1201 20:06:46.077144       1 controller.go:624] quota admission added evaluator for: namespaces
	I1201 20:06:46.114380       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:06:46.116815       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1201 20:06:46.979843       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1201 20:06:46.983196       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:06:46.983211       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:06:47.356470       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:06:47.388692       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:06:47.484955       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:06:47.489818       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1201 20:06:47.490826       1 controller.go:624] quota admission added evaluator for: endpoints
	I1201 20:06:47.494640       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:06:48.352805       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1201 20:06:49.009509       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1201 20:06:49.021131       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:06:49.031989       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1201 20:07:01.912085       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1201 20:07:02.064732       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c80d5b0d8cb785a76ebda8245780ce7764667987104be966aa22cecb66bcb857] <==
	I1201 20:07:01.406505       1 event.go:307] "Event occurred" object="old-k8s-version-217464" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller"
	I1201 20:07:01.408469       1 shared_informer.go:318] Caches are synced for TTL
	I1201 20:07:01.414197       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-217464" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1201 20:07:01.415528       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-217464" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1201 20:07:01.416051       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-217464" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1201 20:07:01.729858       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:07:01.751585       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:07:01.751629       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1201 20:07:01.917057       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1201 20:07:02.080263       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-x9tkl"
	I1201 20:07:02.081846       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fjhhh"
	I1201 20:07:02.163024       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1201 20:07:02.214465       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l9nd2"
	I1201 20:07:02.223868       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jpv6h"
	I1201 20:07:02.241065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="324.93822ms"
	I1201 20:07:02.247841       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l9nd2"
	I1201 20:07:02.258498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.366306ms"
	I1201 20:07:02.264158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.611456ms"
	I1201 20:07:02.264249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.117µs"
	I1201 20:07:15.383688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.413µs"
	I1201 20:07:15.408336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.443µs"
	I1201 20:07:16.191668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.378µs"
	I1201 20:07:16.230489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.907215ms"
	I1201 20:07:16.230671       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.114µs"
	I1201 20:07:16.408045       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [a581ccb96166b7eab027d866785364d31222e75647f90117ca1d2417eb681a30] <==
	I1201 20:07:02.479563       1 server_others.go:69] "Using iptables proxy"
	I1201 20:07:02.489202       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1201 20:07:02.508862       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:07:02.511525       1 server_others.go:152] "Using iptables Proxier"
	I1201 20:07:02.511568       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1201 20:07:02.511579       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1201 20:07:02.511624       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1201 20:07:02.511889       1 server.go:846] "Version info" version="v1.28.0"
	I1201 20:07:02.511908       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:07:02.513086       1 config.go:97] "Starting endpoint slice config controller"
	I1201 20:07:02.513117       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1201 20:07:02.513091       1 config.go:188] "Starting service config controller"
	I1201 20:07:02.513692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1201 20:07:02.513897       1 config.go:315] "Starting node config controller"
	I1201 20:07:02.513916       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1201 20:07:02.613689       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1201 20:07:02.614882       1 shared_informer.go:318] Caches are synced for service config
	I1201 20:07:02.614933       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9d3ec6d855b553fd0099b4797da7a7bd20f2721bf99ff44127a45655b4a8d4cb] <==
	W1201 20:06:46.049180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1201 20:06:46.049201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1201 20:06:46.049840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1201 20:06:46.050048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1201 20:06:46.051134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1201 20:06:46.051196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1201 20:06:46.051147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1201 20:06:46.051309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1201 20:06:46.051327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1201 20:06:46.051400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1201 20:06:46.051417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1201 20:06:46.051513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1201 20:06:46.051156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1201 20:06:46.051591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1201 20:06:46.052040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1201 20:06:46.052070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1201 20:06:46.052594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1201 20:06:46.052620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1201 20:06:46.873793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1201 20:06:46.873825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1201 20:06:47.195762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1201 20:06:47.195811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1201 20:06:47.203329       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1201 20:06:47.203369       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1201 20:06:49.835460       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 01 20:07:01 old-k8s-version-217464 kubelet[1400]: I1201 20:07:01.456738    1400 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 01 20:07:01 old-k8s-version-217464 kubelet[1400]: I1201 20:07:01.457723    1400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.088354    1400 topology_manager.go:215] "Topology Admit Handler" podUID="baa3c072-c4e8-4d7c-ad9f-7ee7461ea900" podNamespace="kube-system" podName="kindnet-x9tkl"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.091847    1400 topology_manager.go:215] "Topology Admit Handler" podUID="12564231-f1d8-4991-b32e-478ee1e61837" podNamespace="kube-system" podName="kube-proxy-fjhhh"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158213    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-cni-cfg\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158491    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-lib-modules\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158537    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12564231-f1d8-4991-b32e-478ee1e61837-kube-proxy\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158565    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-xtables-lock\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158607    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cvf\" (UniqueName: \"kubernetes.io/projected/12564231-f1d8-4991-b32e-478ee1e61837-kube-api-access-q2cvf\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158650    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-xtables-lock\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158678    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrgm\" (UniqueName: \"kubernetes.io/projected/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-kube-api-access-smrgm\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:02 old-k8s-version-217464 kubelet[1400]: I1201 20:07:02.158712    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-lib-modules\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:03 old-k8s-version-217464 kubelet[1400]: I1201 20:07:03.155660    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fjhhh" podStartSLOduration=1.15561047 podCreationTimestamp="2025-12-01 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:03.155320697 +0000 UTC m=+14.186071246" watchObservedRunningTime="2025-12-01 20:07:03.15561047 +0000 UTC m=+14.186361020"
	Dec 01 20:07:05 old-k8s-version-217464 kubelet[1400]: I1201 20:07:05.159706    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-x9tkl" podStartSLOduration=1.083253362 podCreationTimestamp="2025-12-01 20:07:02 +0000 UTC" firstStartedPulling="2025-12-01 20:07:02.398827084 +0000 UTC m=+13.429577625" lastFinishedPulling="2025-12-01 20:07:04.475221207 +0000 UTC m=+15.505971749" observedRunningTime="2025-12-01 20:07:05.15952736 +0000 UTC m=+16.190277956" watchObservedRunningTime="2025-12-01 20:07:05.159647486 +0000 UTC m=+16.190398035"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.345169    1400 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.379847    1400 topology_manager.go:215] "Topology Admit Handler" podUID="dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2" podNamespace="kube-system" podName="storage-provisioner"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.383635    1400 topology_manager.go:215] "Topology Admit Handler" podUID="06a54ff5-5ae8-4a69-898c-003502faf17d" podNamespace="kube-system" podName="coredns-5dd5756b68-jpv6h"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.452687    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9tx\" (UniqueName: \"kubernetes.io/projected/06a54ff5-5ae8-4a69-898c-003502faf17d-kube-api-access-hj9tx\") pod \"coredns-5dd5756b68-jpv6h\" (UID: \"06a54ff5-5ae8-4a69-898c-003502faf17d\") " pod="kube-system/coredns-5dd5756b68-jpv6h"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.452756    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2-tmp\") pod \"storage-provisioner\" (UID: \"dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.452790    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06a54ff5-5ae8-4a69-898c-003502faf17d-config-volume\") pod \"coredns-5dd5756b68-jpv6h\" (UID: \"06a54ff5-5ae8-4a69-898c-003502faf17d\") " pod="kube-system/coredns-5dd5756b68-jpv6h"
	Dec 01 20:07:15 old-k8s-version-217464 kubelet[1400]: I1201 20:07:15.452822    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzn7\" (UniqueName: \"kubernetes.io/projected/dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2-kube-api-access-txzn7\") pod \"storage-provisioner\" (UID: \"dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:16 old-k8s-version-217464 kubelet[1400]: I1201 20:07:16.208437    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.208377808 podCreationTimestamp="2025-12-01 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:16.206075685 +0000 UTC m=+27.236826234" watchObservedRunningTime="2025-12-01 20:07:16.208377808 +0000 UTC m=+27.239128357"
	Dec 01 20:07:16 old-k8s-version-217464 kubelet[1400]: I1201 20:07:16.208566    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jpv6h" podStartSLOduration=14.208538842 podCreationTimestamp="2025-12-01 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:16.191726018 +0000 UTC m=+27.222476578" watchObservedRunningTime="2025-12-01 20:07:16.208538842 +0000 UTC m=+27.239289392"
	Dec 01 20:07:18 old-k8s-version-217464 kubelet[1400]: I1201 20:07:18.106812    1400 topology_manager.go:215] "Topology Admit Handler" podUID="37d188bf-79e8-4b6f-bbfd-3889f55ecfbd" podNamespace="default" podName="busybox"
	Dec 01 20:07:18 old-k8s-version-217464 kubelet[1400]: I1201 20:07:18.171118    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk89\" (UniqueName: \"kubernetes.io/projected/37d188bf-79e8-4b6f-bbfd-3889f55ecfbd-kube-api-access-2rk89\") pod \"busybox\" (UID: \"37d188bf-79e8-4b6f-bbfd-3889f55ecfbd\") " pod="default/busybox"
	
	
	==> storage-provisioner [9edce7ead979d495f1ed6a55ec0615f37c4f5343f3bb417fe216dc58dfcce5d2] <==
	I1201 20:07:15.890583       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:07:15.902672       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:07:15.902793       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1201 20:07:15.913483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:07:15.914046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-217464_f7c9038e-6a5e-4b33-8115-ce007323c938!
	I1201 20:07:15.914220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dec30bb0-537f-419c-a245-48e1bea74724", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-217464_f7c9038e-6a5e-4b33-8115-ce007323c938 became leader
	I1201 20:07:16.015043       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-217464_f7c9038e-6a5e-4b33-8115-ce007323c938!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-217464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.631978ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
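Note on the failure above: the MK_ADDON_ENABLE_PAUSED exit comes from the "check paused" step named in the error, which runs `sudo runc list -f json` on the node and fails because /run/runc does not exist there. A minimal standalone sketch of that same probe (not minikube's actual implementation), assuming the docker-driver node container no-preload-240359 is still running and reachable with `docker exec`:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same command the failed check reports: list runc containers inside the node.
        out, err := exec.Command("docker", "exec", "no-preload-240359",
            "sudo", "runc", "list", "-f", "json").CombinedOutput()
        if err != nil {
            // In this run the command exits with status 1 ("open /run/runc: no such
            // file or directory"), which is what aborts the addon enable.
            fmt.Printf("runc list failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("runc containers: %s\n", out)
    }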
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-240359 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-240359 describe deploy/metrics-server -n kube-system: exit status 1 (72.772365ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-240359 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
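The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment's container image to carry the fake.domain registry override passed on the command line; here the deployment was never created (NotFound above), so there is nothing to compare. A minimal sketch of the same image check as a standalone helper (hypothetical, not the test's own code), assuming kubectl can still reach the no-preload-240359 context:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the metrics-server deployment's container image, as the failing check does.
        out, err := exec.Command("kubectl", "--context", "no-preload-240359",
            "-n", "kube-system", "get", "deploy", "metrics-server",
            "-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
        if err != nil {
            // In this run the deployment does not exist, so the lookup fails before
            // any image comparison can happen.
            fmt.Printf("deployment lookup failed: %v\n%s", err, out)
            return
        }
        want := "fake.domain/registry.k8s.io/echoserver:1.4"
        fmt.Println("image has expected registry override:", strings.Contains(string(out), want))
    }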
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-240359
helpers_test.go:243: (dbg) docker inspect no-preload-240359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	        "Created": "2025-12-01T20:07:06.01914801Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 328696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:07:06.05211204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hostname",
	        "HostsPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hosts",
	        "LogPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340-json.log",
	        "Name": "/no-preload-240359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-240359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-240359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	                "LowerDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-240359",
	                "Source": "/var/lib/docker/volumes/no-preload-240359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-240359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-240359",
	                "name.minikube.sigs.k8s.io": "no-preload-240359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b9474897e31ce88125b1bf0863e244a7ffe09dba7fb554d5e5081f654d608b99",
	            "SandboxKey": "/var/run/docker/netns/b9474897e31c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-240359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9442b61c89479da474674c2efe3f782398fb10944284ed674aaa668317b06131",
	                    "EndpointID": "eaad0cd8331292bd351da48d0cae4bc01255dd83e7805a95305d5a6bf3e2ffba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1a:01:51:d3:78:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-240359",
	                        "52fdbf3aa5c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
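For orientation when reading the inspect output: the node publishes its Kubernetes API server port 8443/tcp on the host loopback interface (port 33101 in this run), and the status and logs commands that follow reach the cluster through that mapping. A small sketch (hypothetical helper, not part of the test suite) that extracts the mapping with the docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `docker port` prints the host address a container port is published on,
        // e.g. 127.0.0.1:33101 for 8443/tcp in the inspect output above.
        out, err := exec.Command("docker", "port", "no-preload-240359", "8443/tcp").CombinedOutput()
        if err != nil {
            fmt.Printf("docker port failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("kube-apiserver published at: %s", out)
    }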
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-240359 logs -n 25
E1201 20:08:01.298856   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-240359 logs -n 25: (1.021695983s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-551864 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo docker system info                                                                                                                                                                                                      │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo containerd config dump                                                                                                                                                                                                  │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                               │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:07:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:07:49.303462  345040 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:07:49.303566  345040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:49.303571  345040 out.go:374] Setting ErrFile to fd 2...
	I1201 20:07:49.303578  345040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:49.303823  345040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:07:49.304415  345040 out.go:368] Setting JSON to false
	I1201 20:07:49.305845  345040 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6620,"bootTime":1764613049,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:07:49.305912  345040 start.go:143] virtualization: kvm guest
	I1201 20:07:49.307736  345040 out.go:179] * [old-k8s-version-217464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:07:49.309102  345040 notify.go:221] Checking for updates...
	I1201 20:07:49.309128  345040 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:07:49.313638  345040 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:07:49.315081  345040 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:49.316313  345040 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:07:49.317405  345040 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:07:49.318653  345040 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:07:49.320352  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:49.322337  345040 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1201 20:07:49.323580  345040 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:07:49.352531  345040 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:07:49.352623  345040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:49.428196  345040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-01 20:07:49.413959728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:49.428367  345040 docker.go:319] overlay module found
	I1201 20:07:49.431324  345040 out.go:179] * Using the docker driver based on existing profile
	I1201 20:07:49.432939  345040 start.go:309] selected driver: docker
	I1201 20:07:49.432959  345040 start.go:927] validating driver "docker" against &{Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:49.433199  345040 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:07:49.434320  345040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:49.521829  345040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-01 20:07:49.507281493 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:49.522190  345040 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:07:49.522223  345040 cni.go:84] Creating CNI manager for ""
	I1201 20:07:49.522309  345040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:49.522359  345040 start.go:353] cluster config:
	{Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:49.524466  345040 out.go:179] * Starting "old-k8s-version-217464" primary control-plane node in "old-k8s-version-217464" cluster
	I1201 20:07:49.527071  345040 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:07:49.528519  345040 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:07:49.529745  345040 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:07:49.529783  345040 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1201 20:07:49.529792  345040 cache.go:65] Caching tarball of preloaded images
	I1201 20:07:49.529885  345040 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:07:49.529895  345040 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1201 20:07:49.530016  345040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/config.json ...
	I1201 20:07:49.530019  345040 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:07:49.555746  345040 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:07:49.555769  345040 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:07:49.555788  345040 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:07:49.555831  345040 start.go:360] acquireMachinesLock for old-k8s-version-217464: {Name:mkc4365980251c10c3c1ecbb8bf9a930e1d6a78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:07:49.555898  345040 start.go:364] duration metric: took 43.667µs to acquireMachinesLock for "old-k8s-version-217464"
	I1201 20:07:49.555919  345040 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:07:49.555928  345040 fix.go:54] fixHost starting: 
	I1201 20:07:49.556191  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:49.578800  345040 fix.go:112] recreateIfNeeded on old-k8s-version-217464: state=Stopped err=<nil>
	W1201 20:07:49.578828  345040 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:07:45.945877  327969 node_ready.go:57] node "no-preload-240359" has "Ready":"False" status (will retry)
	W1201 20:07:48.046722  327969 node_ready.go:57] node "no-preload-240359" has "Ready":"False" status (will retry)
	I1201 20:07:48.465107  327969 node_ready.go:49] node "no-preload-240359" is "Ready"
	I1201 20:07:48.465145  327969 node_ready.go:38] duration metric: took 13.522447122s for node "no-preload-240359" to be "Ready" ...
	I1201 20:07:48.465161  327969 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:07:48.465219  327969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:07:48.482277  327969 api_server.go:72] duration metric: took 13.892147682s to wait for apiserver process to appear ...
	I1201 20:07:48.482332  327969 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:07:48.482355  327969 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:07:48.554505  327969 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 20:07:48.556110  327969 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:07:48.556137  327969 api_server.go:131] duration metric: took 73.797467ms to wait for apiserver health ...
	I1201 20:07:48.556148  327969 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:07:48.562079  327969 system_pods.go:59] 8 kube-system pods found
	I1201 20:07:48.562129  327969 system_pods.go:61] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.562138  327969 system_pods.go:61] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.562147  327969 system_pods.go:61] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.562153  327969 system_pods.go:61] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.562158  327969 system_pods.go:61] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.562163  327969 system_pods.go:61] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.562168  327969 system_pods.go:61] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.562173  327969 system_pods.go:61] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending
	I1201 20:07:48.562180  327969 system_pods.go:74] duration metric: took 6.025957ms to wait for pod list to return data ...
	I1201 20:07:48.562190  327969 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:07:48.579495  327969 default_sa.go:45] found service account: "default"
	I1201 20:07:48.579547  327969 default_sa.go:55] duration metric: took 17.349787ms for default service account to be created ...
	I1201 20:07:48.579569  327969 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:07:48.583567  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:48.583642  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.583650  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.583661  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.583667  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.583672  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.583677  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.583683  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.583737  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:48.583791  327969 retry.go:31] will retry after 219.203786ms: missing components: kube-dns
	I1201 20:07:48.807776  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:48.807812  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.807821  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.807828  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.807833  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.807839  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.807846  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.807855  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.807863  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:48.807883  327969 retry.go:31] will retry after 343.152219ms: missing components: kube-dns
	I1201 20:07:49.155137  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:49.155177  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:49.155188  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:49.155197  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:49.155203  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:49.155363  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:49.155397  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:49.155404  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:49.155414  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:49.155434  327969 retry.go:31] will retry after 430.693782ms: missing components: kube-dns
	I1201 20:07:49.591988  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:49.592022  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Running
	I1201 20:07:49.592031  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:49.592036  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:49.592042  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:49.592048  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:49.592053  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:49.592058  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:49.592063  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Running
	I1201 20:07:49.592073  327969 system_pods.go:126] duration metric: took 1.012496766s to wait for k8s-apps to be running ...
	I1201 20:07:49.592083  327969 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:07:49.592140  327969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:07:49.615349  327969 system_svc.go:56] duration metric: took 23.256986ms WaitForService to wait for kubelet
	I1201 20:07:49.615385  327969 kubeadm.go:587] duration metric: took 15.025259573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:07:49.615407  327969 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:07:49.623831  327969 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:07:49.623867  327969 node_conditions.go:123] node cpu capacity is 8
	I1201 20:07:49.623888  327969 node_conditions.go:105] duration metric: took 8.476008ms to run NodePressure ...
	I1201 20:07:49.623904  327969 start.go:242] waiting for startup goroutines ...
	I1201 20:07:49.623913  327969 start.go:247] waiting for cluster config update ...
	I1201 20:07:49.623925  327969 start.go:256] writing updated cluster config ...
	I1201 20:07:49.624226  327969 ssh_runner.go:195] Run: rm -f paused
	I1201 20:07:49.631257  327969 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:07:49.637794  327969 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.644835  327969 pod_ready.go:94] pod "coredns-7d764666f9-6kzhv" is "Ready"
	I1201 20:07:49.644864  327969 pod_ready.go:86] duration metric: took 7.040491ms for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.647550  327969 pod_ready.go:83] waiting for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.652325  327969 pod_ready.go:94] pod "etcd-no-preload-240359" is "Ready"
	I1201 20:07:49.652350  327969 pod_ready.go:86] duration metric: took 4.773464ms for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.654722  327969 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.659337  327969 pod_ready.go:94] pod "kube-apiserver-no-preload-240359" is "Ready"
	I1201 20:07:49.659359  327969 pod_ready.go:86] duration metric: took 4.616227ms for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.661688  327969 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.036111  327969 pod_ready.go:94] pod "kube-controller-manager-no-preload-240359" is "Ready"
	I1201 20:07:50.036143  327969 pod_ready.go:86] duration metric: took 374.426482ms for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.236198  327969 pod_ready.go:83] waiting for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.636159  327969 pod_ready.go:94] pod "kube-proxy-zbbsb" is "Ready"
	I1201 20:07:50.636185  327969 pod_ready.go:86] duration metric: took 399.964847ms for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.836213  327969 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:51.235530  327969 pod_ready.go:94] pod "kube-scheduler-no-preload-240359" is "Ready"
	I1201 20:07:51.235556  327969 pod_ready.go:86] duration metric: took 399.321978ms for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:51.235568  327969 pod_ready.go:40] duration metric: took 1.604263583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:07:51.278178  327969 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:07:51.280211  327969 out.go:179] * Done! kubectl is now configured to use "no-preload-240359" cluster and "default" namespace by default
	I1201 20:07:49.038105  335220 addons.go:530] duration metric: took 774.908267ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1201 20:07:49.261733  335220 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-990820" context rescaled to 1 replicas
	W1201 20:07:50.827430  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	I1201 20:07:49.330223  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Running}}
	I1201 20:07:49.352963  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.377396  343871 cli_runner.go:164] Run: docker exec default-k8s-diff-port-009682 stat /var/lib/dpkg/alternatives/iptables
	I1201 20:07:49.445148  343871 oci.go:144] the created container "default-k8s-diff-port-009682" has a running status.
	I1201 20:07:49.445186  343871 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa...
	I1201 20:07:49.482681  343871 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 20:07:49.519173  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.544694  343871 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 20:07:49.544712  343871 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-009682 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 20:07:49.592938  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.620059  343871 machine.go:94] provisionDockerMachine start ...
	I1201 20:07:49.620169  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:49.647691  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:49.648016  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:49.648033  343871 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:07:49.648789  343871 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41656->127.0.0.1:33108: read: connection reset by peer
	I1201 20:07:52.790039  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:07:52.790067  343871 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-009682"
	I1201 20:07:52.790146  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:52.808047  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:52.808332  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:52.808353  343871 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-009682 && echo "default-k8s-diff-port-009682" | sudo tee /etc/hostname
	I1201 20:07:52.964935  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:07:52.965034  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:52.986454  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:52.986724  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:52.986747  343871 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-009682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-009682/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-009682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:07:53.129794  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:07:53.129817  343871 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:07:53.129858  343871 ubuntu.go:190] setting up certificates
	I1201 20:07:53.129869  343871 provision.go:84] configureAuth start
	I1201 20:07:53.129928  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:53.148793  343871 provision.go:143] copyHostCerts
	I1201 20:07:53.148863  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:07:53.148877  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:07:53.148968  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:07:53.149080  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:07:53.149088  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:07:53.149119  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:07:53.149175  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:07:53.149183  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:07:53.149206  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:07:53.149254  343871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-009682 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-009682 localhost minikube]
	I1201 20:07:53.257229  343871 provision.go:177] copyRemoteCerts
	I1201 20:07:53.257299  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:07:53.257351  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.278599  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:53.380552  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:07:53.399580  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:07:53.416478  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:07:53.436758  343871 provision.go:87] duration metric: took 306.875067ms to configureAuth
	I1201 20:07:53.436788  343871 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:07:53.436997  343871 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:07:53.437136  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.457678  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.457982  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:53.458008  343871 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:07:53.741692  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:07:53.741716  343871 machine.go:97] duration metric: took 4.121631094s to provisionDockerMachine
	I1201 20:07:53.741729  343871 client.go:176] duration metric: took 9.423790827s to LocalClient.Create
	I1201 20:07:53.741751  343871 start.go:167] duration metric: took 9.423858779s to libmachine.API.Create "default-k8s-diff-port-009682"
	I1201 20:07:53.741763  343871 start.go:293] postStartSetup for "default-k8s-diff-port-009682" (driver="docker")
	I1201 20:07:53.741779  343871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:07:53.741851  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:07:53.741885  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.760146  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:53.861604  343871 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:07:53.865388  343871 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:07:53.865421  343871 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:07:53.865434  343871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:07:53.865500  343871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:07:53.865602  343871 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:07:53.865745  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:07:53.873537  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:53.895139  343871 start.go:296] duration metric: took 153.35865ms for postStartSetup
	I1201 20:07:53.895534  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:53.914227  343871 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:07:53.914495  343871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:07:53.914544  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.932721  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:54.030621  343871 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:07:54.036397  343871 start.go:128] duration metric: took 9.720648247s to createHost
	I1201 20:07:54.036423  343871 start.go:83] releasing machines lock for "default-k8s-diff-port-009682", held for 9.720781142s
	I1201 20:07:54.036484  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:54.056643  343871 ssh_runner.go:195] Run: cat /version.json
	I1201 20:07:54.056691  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:54.056770  343871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:07:54.056849  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:54.077058  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:54.077850  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:49.580558  345040 out.go:252] * Restarting existing docker container for "old-k8s-version-217464" ...
	I1201 20:07:49.580634  345040 cli_runner.go:164] Run: docker start old-k8s-version-217464
	I1201 20:07:49.878450  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:49.898541  345040 kic.go:430] container "old-k8s-version-217464" state is running.
	I1201 20:07:49.898970  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:49.920515  345040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/config.json ...
	I1201 20:07:49.920741  345040 machine.go:94] provisionDockerMachine start ...
	I1201 20:07:49.920796  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:49.942851  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:49.943076  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:49.943089  345040 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:07:49.943770  345040 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41098->127.0.0.1:33113: read: connection reset by peer
	I1201 20:07:53.092863  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-217464
	
	I1201 20:07:53.092889  345040 ubuntu.go:182] provisioning hostname "old-k8s-version-217464"
	I1201 20:07:53.092934  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.110764  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.110972  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.110984  345040 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-217464 && echo "old-k8s-version-217464" | sudo tee /etc/hostname
	I1201 20:07:53.260609  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-217464
	
	I1201 20:07:53.260683  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.279956  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.280176  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.280191  345040 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-217464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-217464/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-217464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:07:53.419880  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:07:53.419909  345040 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:07:53.419960  345040 ubuntu.go:190] setting up certificates
	I1201 20:07:53.419987  345040 provision.go:84] configureAuth start
	I1201 20:07:53.420045  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:53.440584  345040 provision.go:143] copyHostCerts
	I1201 20:07:53.440638  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:07:53.440646  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:07:53.440708  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:07:53.440823  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:07:53.440834  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:07:53.440894  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:07:53.441039  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:07:53.441052  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:07:53.441097  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:07:53.441174  345040 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-217464 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-217464]
	I1201 20:07:53.518882  345040 provision.go:177] copyRemoteCerts
	I1201 20:07:53.518940  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:07:53.518971  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.539126  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:53.640027  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:07:53.660835  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1201 20:07:53.678858  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 20:07:53.696471  345040 provision.go:87] duration metric: took 276.468842ms to configureAuth
	I1201 20:07:53.696493  345040 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:07:53.696665  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:53.696777  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.716101  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.716339  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.716357  345040 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:07:54.035977  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:07:54.036009  345040 machine.go:97] duration metric: took 4.115255697s to provisionDockerMachine
	I1201 20:07:54.036023  345040 start.go:293] postStartSetup for "old-k8s-version-217464" (driver="docker")
	I1201 20:07:54.036036  345040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:07:54.036108  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:07:54.036163  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.056907  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.159946  345040 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:07:54.163424  345040 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:07:54.163465  345040 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:07:54.163480  345040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:07:54.163532  345040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:07:54.163636  345040 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:07:54.163768  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:07:54.171506  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:54.189265  345040 start.go:296] duration metric: took 153.227057ms for postStartSetup
	I1201 20:07:54.189372  345040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:07:54.189413  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.209404  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.232560  343871 ssh_runner.go:195] Run: systemctl --version
	I1201 20:07:54.239369  343871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:07:54.275426  343871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:07:54.280387  343871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:07:54.280439  343871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:07:54.306355  343871 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 20:07:54.306374  343871 start.go:496] detecting cgroup driver to use...
	I1201 20:07:54.306407  343871 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:07:54.306454  343871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:07:54.322496  343871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:07:54.337536  343871 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:07:54.337595  343871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:07:54.357441  343871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:07:54.375811  343871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:07:54.472153  343871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:07:54.570800  343871 docker.go:234] disabling docker service ...
	I1201 20:07:54.570860  343871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:07:54.591870  343871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:07:54.604849  343871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:07:54.699578  343871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:07:54.797563  343871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:07:54.810655  343871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:07:54.825158  343871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:07:54.825218  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.835785  343871 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:07:54.835842  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.844798  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.856705  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.866860  343871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:07:54.875211  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.884251  343871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.897922  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.907503  343871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:07:54.915549  343871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:07:54.923004  343871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.003651  343871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:07:55.161627  343871 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:07:55.161691  343871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:07:55.165765  343871 start.go:564] Will wait 60s for crictl version
	I1201 20:07:55.165818  343871 ssh_runner.go:195] Run: which crictl
	I1201 20:07:55.169554  343871 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:07:55.196315  343871 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:07:55.196405  343871 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.225245  343871 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.254328  343871 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:07:54.306181  345040 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:07:54.311115  345040 fix.go:56] duration metric: took 4.755182827s for fixHost
	I1201 20:07:54.311139  345040 start.go:83] releasing machines lock for "old-k8s-version-217464", held for 4.75522957s
	I1201 20:07:54.311188  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:54.331137  345040 ssh_runner.go:195] Run: cat /version.json
	I1201 20:07:54.331206  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.331225  345040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:07:54.331323  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.351091  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.352165  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.448457  345040 ssh_runner.go:195] Run: systemctl --version
	I1201 20:07:54.504933  345040 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:07:54.549146  345040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:07:54.554144  345040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:07:54.554199  345040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:07:54.562889  345040 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:07:54.562912  345040 start.go:496] detecting cgroup driver to use...
	I1201 20:07:54.562937  345040 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:07:54.562969  345040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:07:54.578133  345040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:07:54.591629  345040 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:07:54.591698  345040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:07:54.607915  345040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:07:54.620877  345040 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:07:54.709963  345040 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:07:54.795738  345040 docker.go:234] disabling docker service ...
	I1201 20:07:54.795804  345040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:07:54.811814  345040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:07:54.824350  345040 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:07:54.910089  345040 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:07:55.000161  345040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:07:55.013327  345040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:07:55.028735  345040 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1201 20:07:55.028792  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.037931  345040 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:07:55.037983  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.046682  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.056313  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.065476  345040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:07:55.074420  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.083356  345040 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.092454  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.103551  345040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:07:55.111538  345040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:07:55.118797  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.206454  345040 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:07:55.355748  345040 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:07:55.355826  345040 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:07:55.360030  345040 start.go:564] Will wait 60s for crictl version
	I1201 20:07:55.360087  345040 ssh_runner.go:195] Run: which crictl
	I1201 20:07:55.363806  345040 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:07:55.389953  345040 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:07:55.390053  345040 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.420750  345040 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.454432  345040 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	W1201 20:07:52.828180  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	W1201 20:07:55.327898  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	I1201 20:07:55.455704  345040 cli_runner.go:164] Run: docker network inspect old-k8s-version-217464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:07:55.475323  345040 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:07:55.479564  345040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
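The host.minikube.internal entry is refreshed with a filter-and-append pattern rather than an in-place edit, so a stale mapping is never left behind. A sketch of the same idiom (HOST_IP is a placeholder; 192.168.76.1 is the gateway in this run):

  HOST_IP=192.168.76.1
  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
    printf '%s\thost.minikube.internal\n' "$HOST_IP"; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts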
	I1201 20:07:55.489945  345040 kubeadm.go:884] updating cluster {Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:07:55.490076  345040 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:07:55.490147  345040 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.530140  345040 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.530166  345040 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:07:55.530218  345040 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.557241  345040 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.557268  345040 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:07:55.557283  345040 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1201 20:07:55.557468  345040 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-217464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
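The kubelet ExecStart override rendered above only takes effect once systemd re-reads its unit files, which the runner does a few lines further down with a daemon-reload and a kubelet start. A minimal way to confirm the drop-in is actually picked up (systemctl cat merges the unit with its drop-ins; shown as a sketch, not something the test runs):

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  systemctl cat kubelet | grep -- --hostname-override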
	I1201 20:07:55.557563  345040 ssh_runner.go:195] Run: crio config
	I1201 20:07:55.605791  345040 cni.go:84] Creating CNI manager for ""
	I1201 20:07:55.605818  345040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:55.605835  345040 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:07:55.605859  345040 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-217464 NodeName:old-k8s-version-217464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:07:55.606080  345040 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-217464"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:07:55.606163  345040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1201 20:07:55.614793  345040 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:07:55.614853  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:07:55.623764  345040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1201 20:07:55.636737  345040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:07:55.649368  345040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
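The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file without touching the cluster, using the node-local kubeadm binary (a standard kubeadm option, not something this runner invokes), is a dry run against the staged file:

  sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run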
	I1201 20:07:55.662550  345040 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:07:55.666352  345040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.676964  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.762132  345040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:55.792780  345040 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464 for IP: 192.168.76.2
	I1201 20:07:55.792802  345040 certs.go:195] generating shared ca certs ...
	I1201 20:07:55.792822  345040 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.792977  345040 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:07:55.793032  345040 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:07:55.793044  345040 certs.go:257] generating profile certs ...
	I1201 20:07:55.793166  345040 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/client.key
	I1201 20:07:55.793248  345040 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.key.6b4b6768
	I1201 20:07:55.793332  345040 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.key
	I1201 20:07:55.793478  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:07:55.793523  345040 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:07:55.793535  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:07:55.793571  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:07:55.793605  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:07:55.793636  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:07:55.793699  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:55.794463  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:07:55.813962  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:07:55.833693  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:07:55.853214  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:07:55.874583  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1201 20:07:55.897128  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:07:55.915812  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:07:55.934097  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:07:55.951563  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:07:55.968980  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:07:55.986866  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:07:56.006254  345040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:07:56.019243  345040 ssh_runner.go:195] Run: openssl version
	I1201 20:07:56.025496  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:07:56.034731  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.038605  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.038665  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.074232  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:07:56.083209  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:07:56.091774  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.095818  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.095874  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.131547  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:07:56.139470  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:07:56.148347  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.152876  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.152927  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189474  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
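The openssl x509 -hash / ln -fs pairs above are how the extra CAs land in the system trust store: each PEM in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 here), which is the layout OpenSSL uses for lookup. The same idiom for an arbitrary certificate (CERT is a placeholder path):

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"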
	I1201 20:07:56.198393  345040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:07:56.202348  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:07:56.237949  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:07:56.273429  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:07:56.318973  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:07:56.370231  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:07:56.422011  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
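Each of the openssl runs above uses -checkend 86400 to ask whether a certificate expires within the next 24 hours; the command exits 0 if the cert is still valid past that window and non-zero otherwise, so the result can drive a regenerate-or-skip decision. For example:

  if sudo openssl x509 -noout -checkend 86400 \
       -in /var/lib/minikube/certs/apiserver.crt; then
    echo "apiserver cert valid for at least another day"
  else
    echo "apiserver cert expires within 24h"
  fi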
	I1201 20:07:56.488771  345040 kubeadm.go:401] StartCluster: {Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:56.488879  345040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:07:56.488937  345040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:07:56.527703  345040 cri.go:89] found id: "9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421"
	I1201 20:07:56.527722  345040 cri.go:89] found id: "50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591"
	I1201 20:07:56.527726  345040 cri.go:89] found id: "4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f"
	I1201 20:07:56.527731  345040 cri.go:89] found id: "604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428"
	I1201 20:07:56.527747  345040 cri.go:89] found id: ""
	I1201 20:07:56.527782  345040 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:07:56.540424  345040 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:07:56Z" level=error msg="open /run/runc: no such file or directory"
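The runner enumerates kube-system containers via crictl's pod-namespace label and then tries runc list to detect paused ones; the runc call fails here because /run/runc is absent, which is logged as a warning and ignored. The crictl side of that query, runnable on any CRI node:

  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system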
	I1201 20:07:56.540499  345040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:07:56.551101  345040 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:07:56.551121  345040 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:07:56.551212  345040 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:07:56.559726  345040 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:07:56.560571  345040 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-217464" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:56.561282  345040 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-217464" cluster setting kubeconfig missing "old-k8s-version-217464" context setting]
	I1201 20:07:56.562609  345040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.564740  345040 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:07:56.573859  345040 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:07:56.573888  345040 kubeadm.go:602] duration metric: took 22.751807ms to restartPrimaryControlPlane
	I1201 20:07:56.573897  345040 kubeadm.go:403] duration metric: took 85.13563ms to StartCluster
	I1201 20:07:56.573914  345040 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.573974  345040 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:56.576130  345040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.576672  345040 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:07:56.576707  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:56.577037  345040 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:07:56.577146  345040 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-217464"
	I1201 20:07:56.577165  345040 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-217464"
	I1201 20:07:56.577163  345040 addons.go:70] Setting dashboard=true in profile "old-k8s-version-217464"
	W1201 20:07:56.577173  345040 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:07:56.577187  345040 addons.go:239] Setting addon dashboard=true in "old-k8s-version-217464"
	W1201 20:07:56.577197  345040 addons.go:248] addon dashboard should already be in state true
	I1201 20:07:56.577204  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.577231  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.577189  345040 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-217464"
	I1201 20:07:56.577334  345040 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-217464"
	I1201 20:07:56.577729  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.577758  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.578107  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.581708  345040 out.go:179] * Verifying Kubernetes components...
	I1201 20:07:56.583053  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:56.605382  345040 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:07:56.605790  345040 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-217464"
	W1201 20:07:56.605813  345040 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:07:56.605841  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.606387  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.606610  345040 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:07:56.606628  345040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:07:56.606673  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.611193  345040 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:07:56.612471  345040 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:07:55.255649  343871 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-009682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:07:55.275029  343871 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:07:55.279251  343871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.290100  343871 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:07:55.290215  343871 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:07:55.290262  343871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.324234  343871 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.324253  343871 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:07:55.324309  343871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.349974  343871 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.349997  343871 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:07:55.350006  343871 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1201 20:07:55.350178  343871 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-009682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:07:55.350243  343871 ssh_runner.go:195] Run: crio config
	I1201 20:07:55.398778  343871 cni.go:84] Creating CNI manager for ""
	I1201 20:07:55.398800  343871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:55.398818  343871 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:07:55.398883  343871 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-009682 NodeName:default-k8s-diff-port-009682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:07:55.399004  343871 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-009682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:07:55.399079  343871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:07:55.407875  343871 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:07:55.407961  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:07:55.416610  343871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1201 20:07:55.431116  343871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:07:55.448684  343871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1201 20:07:55.463113  343871 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:07:55.467424  343871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.478713  343871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.562581  343871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:55.589935  343871 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682 for IP: 192.168.103.2
	I1201 20:07:55.589951  343871 certs.go:195] generating shared ca certs ...
	I1201 20:07:55.589966  343871 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.590133  343871 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:07:55.590184  343871 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:07:55.590196  343871 certs.go:257] generating profile certs ...
	I1201 20:07:55.590261  343871 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key
	I1201 20:07:55.590281  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt with IP's: []
	I1201 20:07:55.720354  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt ...
	I1201 20:07:55.720381  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt: {Name:mk3534163d936160446daade155159815f0a82ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.720544  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key ...
	I1201 20:07:55.720559  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key: {Name:mkf9595ac97f87c1b0c1306a7e2c55a45fcf6771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.720642  343871 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564
	I1201 20:07:55.720662  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1201 20:07:55.814484  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 ...
	I1201 20:07:55.814559  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564: {Name:mke28397abe478ff9401c23a10947ad67439f4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.814751  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564 ...
	I1201 20:07:55.814775  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564: {Name:mk6154355226d3266d30366a517b2f4bc80bc0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.814891  343871 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt
	I1201 20:07:55.814963  343871 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key
	I1201 20:07:55.815015  343871 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key
	I1201 20:07:55.815029  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt with IP's: []
	I1201 20:07:55.896420  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt ...
	I1201 20:07:55.896447  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt: {Name:mk76d91b7411e61bd0d00e522f7e37f278f501bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.896614  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key ...
	I1201 20:07:55.896637  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key: {Name:mkbe4b2c3a65cb8729d8862f90c087d0dbb635d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
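The profile certificates above (client, apiserver, proxy-client) are produced by minikube's own Go crypto code (crypto.go), signed by the shared minikubeCA generated earlier. Purely as an illustration of the same flow with stock tooling, not what minikube actually runs, issuing one such client cert with openssl would look roughly like this (paths and the subject are illustrative):

  # ca.crt/ca.key stand in for the shared minikube CA
  openssl genrsa -out client.key 2048
  openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt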
	I1201 20:07:55.896880  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:07:55.896940  343871 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:07:55.896956  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:07:55.897003  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:07:55.897039  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:07:55.897075  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:07:55.897150  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:55.897864  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:07:55.916265  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:07:55.934643  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:07:55.951972  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:07:55.969101  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:07:55.987093  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:07:56.006501  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:07:56.024552  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:07:56.042507  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:07:56.061838  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:07:56.080447  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:07:56.098873  343871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:07:56.111570  343871 ssh_runner.go:195] Run: openssl version
	I1201 20:07:56.117710  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:07:56.126280  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.130032  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.130117  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.168091  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:07:56.176983  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:07:56.185768  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189782  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189834  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.225687  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:07:56.235117  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:07:56.243874  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.247766  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.247821  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.283510  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:07:56.292619  343871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:07:56.296570  343871 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:07:56.296635  343871 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:56.296735  343871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:07:56.296787  343871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:07:56.337314  343871 cri.go:89] found id: ""
	I1201 20:07:56.337391  343871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:07:56.346840  343871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:07:56.355855  343871 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:07:56.355921  343871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:07:56.365679  343871 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:07:56.365699  343871 kubeadm.go:158] found existing configuration files:
	
	I1201 20:07:56.365746  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1201 20:07:56.374266  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:07:56.374348  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:07:56.385029  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1201 20:07:56.395639  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:07:56.395691  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:07:56.405820  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1201 20:07:56.415916  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:07:56.416060  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:07:56.427393  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1201 20:07:56.438149  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:07:56.438211  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
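The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the endpoint (or the file itself) is missing, so kubeadm will regenerate it. The same check as a compact loop (the endpoint matches this run's API server port 8444):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q https://control-plane.minikube.internal:8444 /etc/kubernetes/$f \
      || sudo rm -f /etc/kubernetes/$f
  done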
	I1201 20:07:56.449334  343871 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:07:56.513858  343871 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:07:56.513956  343871 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:07:56.544320  343871 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:07:56.544415  343871 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:07:56.544469  343871 kubeadm.go:319] OS: Linux
	I1201 20:07:56.544634  343871 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:07:56.544726  343871 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:07:56.544806  343871 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:07:56.544883  343871 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:07:56.544959  343871 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:07:56.545026  343871 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:07:56.545110  343871 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:07:56.545174  343871 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:07:56.642013  343871 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:07:56.642156  343871 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:07:56.642299  343871 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:07:56.653887  343871 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:07:56.655639  343871 out.go:252]   - Generating certificates and keys ...
	I1201 20:07:56.658055  343871 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:07:56.658360  343871 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:07:57.069239  343871 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:07:57.227835  343871 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:07:57.306854  343871 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:07:57.663805  343871 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:07:58.608348  343871 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:07:58.608583  343871 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-009682 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1201 20:07:58.769470  343871 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:07:58.769771  343871 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-009682 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1201 20:07:56.613662  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:07:56.613718  345040 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:07:56.613816  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.636129  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.651433  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.652045  345040 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:07:56.652069  345040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:07:56.652122  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.679120  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.752013  345040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:56.769121  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:07:56.771814  345040 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-217464" to be "Ready" ...
	I1201 20:07:56.776135  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:07:56.776154  345040 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:07:56.796044  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:07:56.796071  345040 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:07:56.800968  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:07:56.813467  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:07:56.813488  345040 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:07:56.831426  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:07:56.831448  345040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:07:56.850338  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:07:56.850363  345040 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:07:56.866863  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:07:56.866901  345040 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:07:56.883338  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:07:56.883365  345040 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:07:56.900964  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:07:56.900988  345040 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:07:56.915120  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:07:56.915144  345040 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:07:56.931193  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:07:59.300079  345040 node_ready.go:49] node "old-k8s-version-217464" is "Ready"
	I1201 20:07:59.300120  345040 node_ready.go:38] duration metric: took 2.528276304s for node "old-k8s-version-217464" to be "Ready" ...
	I1201 20:07:59.300135  345040 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:07:59.300183  345040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:07:59.969570  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.200410076s)
	I1201 20:07:59.969686  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.168690088s)
	I1201 20:08:00.413735  345040 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.113525969s)
	I1201 20:08:00.413778  345040 api_server.go:72] duration metric: took 3.837072549s to wait for apiserver process to appear ...
	I1201 20:08:00.413787  345040 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:08:00.413810  345040 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:08:00.414360  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.483125858s)
	I1201 20:08:00.415891  345040 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-217464 addons enable metrics-server
	
	I1201 20:08:00.417121  345040 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Dec 01 20:07:48 no-preload-240359 crio[772]: time="2025-12-01T20:07:48.980597002Z" level=info msg="Started container" PID=2813 containerID=f3de7eff9456ab05f3363421d0c9790a79eba41fb65d41318db1880e3a7cda11 description=kube-system/coredns-7d764666f9-6kzhv/coredns id=67818ca1-459c-4ea0-8046-7e916141823e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bb65d294d3a4a225bfb701c59b0a796f2c491c5d1fa883c47dc8e4edf13c1fb
	Dec 01 20:07:48 no-preload-240359 crio[772]: time="2025-12-01T20:07:48.982209347Z" level=info msg="Started container" PID=2810 containerID=1bd5bec372748877ee58bc5de96b6a3b076020a3cfb7dcb571428a05cd9822e5 description=kube-system/storage-provisioner/storage-provisioner id=c2e47db1-cbde-4a57-ab3e-009c0b3ee846 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2cb40c098d0034286491127458f444fd3f850349fdc6dd7c92cac6cde35217f7
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.727438731Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0e7bb8eb-3e3d-4990-8c09-35ae4c21fb9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.727511473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.732231106Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e64ec37c090704e64661a4f14e8f4842ab1ee19a3113003e8d1fa745a624c96c UID:1c152aeb-d4c6-436a-96ef-96d8dff15eba NetNS:/var/run/netns/9051ec94-33df-455c-99c2-3137373a2442 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000794b18}] Aliases:map[]}"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.732261169Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.74200963Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e64ec37c090704e64661a4f14e8f4842ab1ee19a3113003e8d1fa745a624c96c UID:1c152aeb-d4c6-436a-96ef-96d8dff15eba NetNS:/var/run/netns/9051ec94-33df-455c-99c2-3137373a2442 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000794b18}] Aliases:map[]}"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.742140256Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.742883471Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.743873204Z" level=info msg="Ran pod sandbox e64ec37c090704e64661a4f14e8f4842ab1ee19a3113003e8d1fa745a624c96c with infra container: default/busybox/POD" id=0e7bb8eb-3e3d-4990-8c09-35ae4c21fb9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.745302248Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea76d07f-9d9d-497d-9b9f-9a128ef90e21 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.745429771Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ea76d07f-9d9d-497d-9b9f-9a128ef90e21 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.7454643Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ea76d07f-9d9d-497d-9b9f-9a128ef90e21 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.746203662Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=efa1db5d-cfd9-4be5-81f7-f290e9b009da name=/runtime.v1.ImageService/PullImage
	Dec 01 20:07:51 no-preload-240359 crio[772]: time="2025-12-01T20:07:51.747675915Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.006850787Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=efa1db5d-cfd9-4be5-81f7-f290e9b009da name=/runtime.v1.ImageService/PullImage
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.007474596Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd2c2194-435f-4e7c-9ca7-9f689df88bd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.009085115Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0cbb3ec-007f-4fdf-8d80-c2b6cd3ec5a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.011982542Z" level=info msg="Creating container: default/busybox/busybox" id=0cdf1113-ef65-42a3-89a4-597f27bfb98d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.012087635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.016345105Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.016823819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.04181936Z" level=info msg="Created container 1a147ed6e11ce684abdf5b5bf9d814c769ad59edb06e2366c0c2ca390fc5c01b: default/busybox/busybox" id=0cdf1113-ef65-42a3-89a4-597f27bfb98d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.042651927Z" level=info msg="Starting container: 1a147ed6e11ce684abdf5b5bf9d814c769ad59edb06e2366c0c2ca390fc5c01b" id=8adaa963-6d91-46db-b4fa-fc7712e248fb name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:07:53 no-preload-240359 crio[772]: time="2025-12-01T20:07:53.044493645Z" level=info msg="Started container" PID=2885 containerID=1a147ed6e11ce684abdf5b5bf9d814c769ad59edb06e2366c0c2ca390fc5c01b description=default/busybox/busybox id=8adaa963-6d91-46db-b4fa-fc7712e248fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e64ec37c090704e64661a4f14e8f4842ab1ee19a3113003e8d1fa745a624c96c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1a147ed6e11ce       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   e64ec37c09070       busybox                                     default
	f3de7eff9456a       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   4bb65d294d3a4       coredns-7d764666f9-6kzhv                    kube-system
	1bd5bec372748       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   2cb40c098d003       storage-provisioner                         kube-system
	391bc22fa6b40       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   fcc1dd1a600ad       kindnet-s7r55                               kube-system
	76cdb58a3a328       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      26 seconds ago      Running             kube-proxy                0                   5d17113baf460       kube-proxy-zbbsb                            kube-system
	ca5eff114f6d9       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      37 seconds ago      Running             kube-scheduler            0                   8ef3bf0ccc32f       kube-scheduler-no-preload-240359            kube-system
	b7d8648b82ef2       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      37 seconds ago      Running             kube-controller-manager   0                   f307a0d3df834       kube-controller-manager-no-preload-240359   kube-system
	ece8658e996ab       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      37 seconds ago      Running             etcd                      0                   5d40633da1504       etcd-no-preload-240359                      kube-system
	03063906d963d       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      37 seconds ago      Running             kube-apiserver            0                   8de9650e5dabf       kube-apiserver-no-preload-240359            kube-system
	
	
	==> coredns [f3de7eff9456ab05f3363421d0c9790a79eba41fb65d41318db1880e3a7cda11] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49302 - 24211 "HINFO IN 5425342318905401974.7533057901191375046. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049518193s
	
	
	==> describe nodes <==
	Name:               no-preload-240359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-240359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=no-preload-240359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-240359
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:07:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:08:00 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:08:00 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:08:00 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:08:00 +0000   Mon, 01 Dec 2025 20:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-240359
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                061d53b4-7f5d-40c9-8604-f01915628ca1
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-6kzhv                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-240359                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-s7r55                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-240359             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-240359    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-zbbsb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-240359             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-240359 event: Registered Node no-preload-240359 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [ece8658e996ab15747563020c9bb381c9ecbf267605c1814ecd27cbcdcfd11d7] <==
	{"level":"warn","ts":"2025-12-01T20:07:26.831631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.577123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831641Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.290702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831664Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.856606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831663Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.832649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831656Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.723965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831560Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.930559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-01T20:07:26.831585Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.640448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-01T20:07:26.835238Z","caller":"traceutil/trace.go:172","msg":"trace[88430689] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:2; }","duration":"111.287066ms","start":"2025-12-01T20:07:26.723941Z","end":"2025-12-01T20:07:26.835228Z","steps":["trace[88430689] 'agreement among raft nodes before linearized reading'  (duration: 107.629572ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835241Z","caller":"traceutil/trace.go:172","msg":"trace[1029995249] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:2; }","duration":"108.165278ms","start":"2025-12-01T20:07:26.727063Z","end":"2025-12-01T20:07:26.835229Z","steps":["trace[1029995249] 'agreement among raft nodes before linearized reading'  (duration: 104.545205ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835249Z","caller":"traceutil/trace.go:172","msg":"trace[1310921385] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:2; }","duration":"107.433241ms","start":"2025-12-01T20:07:26.727804Z","end":"2025-12-01T20:07:26.835237Z","steps":["trace[1310921385] 'agreement among raft nodes before linearized reading'  (duration: 103.847966ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835429Z","caller":"traceutil/trace.go:172","msg":"trace[531756269] range","detail":"{range_begin:/registry/leases/kube-node-lease/no-preload-240359; range_end:; response_count:0; response_revision:2; }","duration":"107.408688ms","start":"2025-12-01T20:07:26.728010Z","end":"2025-12-01T20:07:26.835418Z","steps":["trace[531756269] 'agreement among raft nodes before linearized reading'  (duration: 103.611088ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835457Z","caller":"traceutil/trace.go:172","msg":"trace[401307353] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:2; }","duration":"112.00225ms","start":"2025-12-01T20:07:26.723444Z","end":"2025-12-01T20:07:26.835447Z","steps":["trace[401307353] 'agreement among raft nodes before linearized reading'  (duration: 107.887181ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835551Z","caller":"traceutil/trace.go:172","msg":"trace[931748675] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:2; }","duration":"107.491088ms","start":"2025-12-01T20:07:26.728050Z","end":"2025-12-01T20:07:26.835542Z","steps":["trace[931748675] 'agreement among raft nodes before linearized reading'  (duration: 103.568415ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835700Z","caller":"traceutil/trace.go:172","msg":"trace[1864994573] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:2; }","duration":"107.345711ms","start":"2025-12-01T20:07:26.728346Z","end":"2025-12-01T20:07:26.835692Z","steps":["trace[1864994573] 'agreement among raft nodes before linearized reading'  (duration: 103.282128ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835751Z","caller":"traceutil/trace.go:172","msg":"trace[847285459] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:2; }","duration":"107.816576ms","start":"2025-12-01T20:07:26.727925Z","end":"2025-12-01T20:07:26.835741Z","steps":["trace[847285459] 'agreement among raft nodes before linearized reading'  (duration: 103.706982ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835857Z","caller":"traceutil/trace.go:172","msg":"trace[983042821] range","detail":"{range_begin:/registry/persistentvolumeclaims; range_end:; response_count:0; response_revision:2; }","duration":"108.008782ms","start":"2025-12-01T20:07:26.727826Z","end":"2025-12-01T20:07:26.835835Z","steps":["trace[983042821] 'agreement among raft nodes before linearized reading'  (duration: 103.816068ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:26.835903Z","caller":"traceutil/trace.go:172","msg":"trace[1635596017] range","detail":"{range_begin:/registry/ipaddresses; range_end:; response_count:0; response_revision:2; }","duration":"112.26806ms","start":"2025-12-01T20:07:26.723626Z","end":"2025-12-01T20:07:26.835894Z","steps":["trace[1635596017] 'agreement among raft nodes before linearized reading'  (duration: 107.918123ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T20:07:48.045031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.43899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-240359\" limit:1 ","response":"range_response_count:1 size:4536"}
	{"level":"info","ts":"2025-12-01T20:07:48.045130Z","caller":"traceutil/trace.go:172","msg":"trace[44179511] range","detail":"{range_begin:/registry/minions/no-preload-240359; range_end:; response_count:1; response_revision:397; }","duration":"100.565152ms","start":"2025-12-01T20:07:47.944555Z","end":"2025-12-01T20:07:48.045120Z","steps":["trace[44179511] 'agreement among raft nodes before linearized reading'  (duration: 91.68456ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.045063Z","caller":"traceutil/trace.go:172","msg":"trace[385536217] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"101.774859ms","start":"2025-12-01T20:07:47.943265Z","end":"2025-12-01T20:07:48.045040Z","steps":["trace[385536217] 'process raft request'  (duration: 93.002086ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.242150Z","caller":"traceutil/trace.go:172","msg":"trace[1680792228] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"187.688187ms","start":"2025-12-01T20:07:48.054425Z","end":"2025-12-01T20:07:48.242113Z","steps":["trace[1680792228] 'process raft request'  (duration: 119.62893ms)","trace[1680792228] 'compare'  (duration: 67.951612ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-01T20:07:48.462364Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.006595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.85.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-12-01T20:07:48.462384Z","caller":"traceutil/trace.go:172","msg":"trace[596891651] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"214.534583ms","start":"2025-12-01T20:07:48.247828Z","end":"2025-12-01T20:07:48.462363Z","steps":["trace[596891651] 'process raft request'  (duration: 155.982714ms)","trace[596891651] 'compare'  (duration: 58.406162ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.463033Z","caller":"traceutil/trace.go:172","msg":"trace[94005045] range","detail":"{range_begin:/registry/masterleases/192.168.85.2; range_end:; response_count:1; response_revision:402; }","duration":"154.114659ms","start":"2025-12-01T20:07:48.308322Z","end":"2025-12-01T20:07:48.462437Z","steps":["trace[94005045] 'agreement among raft nodes before linearized reading'  (duration: 95.438597ms)","trace[94005045] 'range keys from in-memory index tree'  (duration: 58.446235ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.463429Z","caller":"traceutil/trace.go:172","msg":"trace[1961553412] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"153.622473ms","start":"2025-12-01T20:07:48.309795Z","end":"2025-12-01T20:07:48.463418Z","steps":["trace[1961553412] 'process raft request'  (duration: 153.540132ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:08:02 up  1:50,  0 user,  load average: 3.93, 3.21, 2.25
	Linux no-preload-240359 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [391bc22fa6b4033d4b6b3445a7f408dbe07592ea5b8a04a20d63710236bb4477] <==
	I1201 20:07:37.215334       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:07:37.215699       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 20:07:37.215883       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:07:37.215903       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:07:37.215930       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:07:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:07:37.416822       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:07:37.417191       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:07:37.417220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:07:37.417367       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:07:37.917441       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:07:37.917571       1 metrics.go:72] Registering metrics
	I1201 20:07:37.917785       1 controller.go:711] "Syncing nftables rules"
	I1201 20:07:47.419862       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:07:47.419934       1 main.go:301] handling current node
	I1201 20:07:57.419831       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:07:57.419880       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03063906d963dcdc78c93b09f788319442ff00c332f3ce5f7e81188ec7de8c59] <==
	I1201 20:07:26.903631       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:07:26.904473       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 20:07:26.905141       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:07:26.910169       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1201 20:07:26.910372       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:26.918541       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:27.097183       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:07:27.710029       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1201 20:07:27.715450       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:07:27.715469       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:07:28.212883       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:07:28.253347       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:07:28.313843       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:07:28.321011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1201 20:07:28.322458       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:07:28.327156       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:07:28.728676       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:07:29.446018       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:07:29.463220       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:07:29.475076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:07:34.280637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:34.284857       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:34.479275       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:07:34.646925       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1201 20:08:00.540251       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:60336: use of closed network connection
	
	
	==> kube-controller-manager [b7d8648b82ef25bb1c767e259e43fcadbfcee753e50ea30de8e4372e59e23231] <==
	I1201 20:07:33.534048       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534070       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534083       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534087       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534090       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534092       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534098       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534100       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534100       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534107       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.536998       1 range_allocator.go:177] "Sending events to api server"
	I1201 20:07:33.534131       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.537260       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1201 20:07:33.537282       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:07:33.537344       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.533068       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.534110       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.549922       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:07:33.551206       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-240359" podCIDRs=["10.244.0.0/24"]
	I1201 20:07:33.552961       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.634516       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:33.634533       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:07:33.634537       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1201 20:07:33.650934       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:48.536938       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [76cdb58a3a328ed300012e4513a01a366e217bbbc0ef537aeb163c8ddbc9dbc5] <==
	I1201 20:07:35.097690       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:07:35.161636       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:07:35.261928       1 shared_informer.go:377] "Caches are synced"
	I1201 20:07:35.262002       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 20:07:35.262140       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:07:35.284250       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:07:35.284341       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:07:35.291114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:07:35.291554       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:07:35.291674       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:07:35.293150       1 config.go:200] "Starting service config controller"
	I1201 20:07:35.294364       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:07:35.293419       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:07:35.294508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:07:35.293608       1 config.go:309] "Starting node config controller"
	I1201 20:07:35.293501       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:07:35.294679       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:07:35.294662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:07:35.294725       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:07:35.395282       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:07:35.395446       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:07:35.395674       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ca5eff114f6d9c968f3a28c30d45431c4c0f308566912f15af7d01e9de66cce2] <==
	E1201 20:07:27.636551       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:07:27.637727       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1201 20:07:27.683387       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:07:27.684373       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:07:27.684419       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1201 20:07:27.685202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1201 20:07:27.693569       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:07:27.694689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1201 20:07:27.705970       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:07:27.707177       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1201 20:07:27.763807       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:07:27.764973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1201 20:07:27.777140       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1201 20:07:27.778441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1201 20:07:27.782624       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1201 20:07:27.782654       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1201 20:07:27.783720       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1201 20:07:27.783724       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1201 20:07:27.830692       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1201 20:07:27.831815       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1201 20:07:27.919411       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1201 20:07:27.920408       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1201 20:07:27.984834       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1201 20:07:27.985804       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1201 20:07:28.348093       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:07:34 no-preload-240359 kubelet[2222]: I1201 20:07:34.759820    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e217924-5490-46d1-80c4-6354cd3c3f87-xtables-lock\") pod \"kube-proxy-zbbsb\" (UID: \"6e217924-5490-46d1-80c4-6354cd3c3f87\") " pod="kube-system/kube-proxy-zbbsb"
	Dec 01 20:07:34 no-preload-240359 kubelet[2222]: I1201 20:07:34.759844    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8fe8570-bbb2-401d-92b1-9335633eea45-xtables-lock\") pod \"kindnet-s7r55\" (UID: \"a8fe8570-bbb2-401d-92b1-9335633eea45\") " pod="kube-system/kindnet-s7r55"
	Dec 01 20:07:34 no-preload-240359 kubelet[2222]: I1201 20:07:34.759911    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm8n\" (UniqueName: \"kubernetes.io/projected/a8fe8570-bbb2-401d-92b1-9335633eea45-kube-api-access-cdm8n\") pod \"kindnet-s7r55\" (UID: \"a8fe8570-bbb2-401d-92b1-9335633eea45\") " pod="kube-system/kindnet-s7r55"
	Dec 01 20:07:34 no-preload-240359 kubelet[2222]: I1201 20:07:34.760016    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e217924-5490-46d1-80c4-6354cd3c3f87-kube-proxy\") pod \"kube-proxy-zbbsb\" (UID: \"6e217924-5490-46d1-80c4-6354cd3c3f87\") " pod="kube-system/kube-proxy-zbbsb"
	Dec 01 20:07:34 no-preload-240359 kubelet[2222]: I1201 20:07:34.760045    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf6n\" (UniqueName: \"kubernetes.io/projected/6e217924-5490-46d1-80c4-6354cd3c3f87-kube-api-access-dgf6n\") pod \"kube-proxy-zbbsb\" (UID: \"6e217924-5490-46d1-80c4-6354cd3c3f87\") " pod="kube-system/kube-proxy-zbbsb"
	Dec 01 20:07:37 no-preload-240359 kubelet[2222]: I1201 20:07:37.403002    2222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-zbbsb" podStartSLOduration=3.40298476 podStartE2EDuration="3.40298476s" podCreationTimestamp="2025-12-01 20:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:35.408090112 +0000 UTC m=+6.167622790" watchObservedRunningTime="2025-12-01 20:07:37.40298476 +0000 UTC m=+8.162517425"
	Dec 01 20:07:37 no-preload-240359 kubelet[2222]: I1201 20:07:37.403176    2222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-s7r55" podStartSLOduration=1.5222525070000001 podStartE2EDuration="3.403167267s" podCreationTimestamp="2025-12-01 20:07:34 +0000 UTC" firstStartedPulling="2025-12-01 20:07:34.996719093 +0000 UTC m=+5.756251762" lastFinishedPulling="2025-12-01 20:07:36.877633872 +0000 UTC m=+7.637166522" observedRunningTime="2025-12-01 20:07:37.402905975 +0000 UTC m=+8.162438640" watchObservedRunningTime="2025-12-01 20:07:37.403167267 +0000 UTC m=+8.162699931"
	Dec 01 20:07:37 no-preload-240359 kubelet[2222]: E1201 20:07:37.879754    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-240359" containerName="etcd"
	Dec 01 20:07:40 no-preload-240359 kubelet[2222]: E1201 20:07:40.144845    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-240359" containerName="kube-apiserver"
	Dec 01 20:07:40 no-preload-240359 kubelet[2222]: E1201 20:07:40.397598    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-240359" containerName="kube-apiserver"
	Dec 01 20:07:41 no-preload-240359 kubelet[2222]: E1201 20:07:41.604376    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-240359" containerName="kube-controller-manager"
	Dec 01 20:07:43 no-preload-240359 kubelet[2222]: E1201 20:07:43.890551    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-240359" containerName="kube-scheduler"
	Dec 01 20:07:47 no-preload-240359 kubelet[2222]: E1201 20:07:47.881333    2222 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-240359" containerName="etcd"
	Dec 01 20:07:47 no-preload-240359 kubelet[2222]: I1201 20:07:47.941149    2222 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 01 20:07:48 no-preload-240359 kubelet[2222]: I1201 20:07:48.561618    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/55ca6ba6-903c-41c3-bf2f-47e674a452bc-tmp\") pod \"storage-provisioner\" (UID: \"55ca6ba6-903c-41c3-bf2f-47e674a452bc\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:48 no-preload-240359 kubelet[2222]: I1201 20:07:48.561683    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnjb\" (UniqueName: \"kubernetes.io/projected/55ca6ba6-903c-41c3-bf2f-47e674a452bc-kube-api-access-svnjb\") pod \"storage-provisioner\" (UID: \"55ca6ba6-903c-41c3-bf2f-47e674a452bc\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:48 no-preload-240359 kubelet[2222]: I1201 20:07:48.561725    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63c28884-3390-44f0-ba81-6f221ef923c4-config-volume\") pod \"coredns-7d764666f9-6kzhv\" (UID: \"63c28884-3390-44f0-ba81-6f221ef923c4\") " pod="kube-system/coredns-7d764666f9-6kzhv"
	Dec 01 20:07:48 no-preload-240359 kubelet[2222]: I1201 20:07:48.561752    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29hv\" (UniqueName: \"kubernetes.io/projected/63c28884-3390-44f0-ba81-6f221ef923c4-kube-api-access-f29hv\") pod \"coredns-7d764666f9-6kzhv\" (UID: \"63c28884-3390-44f0-ba81-6f221ef923c4\") " pod="kube-system/coredns-7d764666f9-6kzhv"
	Dec 01 20:07:49 no-preload-240359 kubelet[2222]: E1201 20:07:49.422393    2222 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6kzhv" containerName="coredns"
	Dec 01 20:07:49 no-preload-240359 kubelet[2222]: I1201 20:07:49.434824    2222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.434801535 podStartE2EDuration="14.434801535s" podCreationTimestamp="2025-12-01 20:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:49.43448136 +0000 UTC m=+20.194014025" watchObservedRunningTime="2025-12-01 20:07:49.434801535 +0000 UTC m=+20.194334203"
	Dec 01 20:07:49 no-preload-240359 kubelet[2222]: I1201 20:07:49.452092    2222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6kzhv" podStartSLOduration=15.452073262 podStartE2EDuration="15.452073262s" podCreationTimestamp="2025-12-01 20:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:49.451649752 +0000 UTC m=+20.211182428" watchObservedRunningTime="2025-12-01 20:07:49.452073262 +0000 UTC m=+20.211605931"
	Dec 01 20:07:50 no-preload-240359 kubelet[2222]: E1201 20:07:50.423809    2222 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6kzhv" containerName="coredns"
	Dec 01 20:07:51 no-preload-240359 kubelet[2222]: E1201 20:07:51.426118    2222 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6kzhv" containerName="coredns"
	Dec 01 20:07:51 no-preload-240359 kubelet[2222]: I1201 20:07:51.484874    2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfhf\" (UniqueName: \"kubernetes.io/projected/1c152aeb-d4c6-436a-96ef-96d8dff15eba-kube-api-access-gcfhf\") pod \"busybox\" (UID: \"1c152aeb-d4c6-436a-96ef-96d8dff15eba\") " pod="default/busybox"
	Dec 01 20:07:53 no-preload-240359 kubelet[2222]: I1201 20:07:53.446268    2222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.183707868 podStartE2EDuration="2.446245224s" podCreationTimestamp="2025-12-01 20:07:51 +0000 UTC" firstStartedPulling="2025-12-01 20:07:51.745854248 +0000 UTC m=+22.505386893" lastFinishedPulling="2025-12-01 20:07:53.008391603 +0000 UTC m=+23.767924249" observedRunningTime="2025-12-01 20:07:53.445868215 +0000 UTC m=+24.205400881" watchObservedRunningTime="2025-12-01 20:07:53.446245224 +0000 UTC m=+24.205777890"
	
	
	==> storage-provisioner [1bd5bec372748877ee58bc5de96b6a3b076020a3cfb7dcb571428a05cd9822e5] <==
	I1201 20:07:49.001839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:07:49.015405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:07:49.015585       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:07:49.019103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:49.025617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:07:49.026022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:07:49.026191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fcc8b66-6889-4a42-8e02-82e3bfaf2063", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-240359_ff570be3-0f69-4a60-bf14-5e2f2f3db504 became leader
	I1201 20:07:49.026368       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-240359_ff570be3-0f69-4a60-bf14-5e2f2f3db504!
	W1201 20:07:49.036317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:49.047351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:07:49.127596       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-240359_ff570be3-0f69-4a60-bf14-5e2f2f3db504!
	W1201 20:07:51.051173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:51.056108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:53.058664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:53.062220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:55.066316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:55.070446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:57.073729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:57.080594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:59.083782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:07:59.087773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:01.091379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:01.095610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
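Note on the repeated warnings in the storage-provisioner log above: the provisioner takes and renews its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object (the "attempting to acquire leader lease" and LeaderElection event lines), and every read or write of that object now triggers the "v1 Endpoints is deprecated in v1.33+" warning, which is why it repeats every couple of seconds as the lease is renewed. The warnings are benign for this test. For controllers that only need a lock, the usual modern alternative is a coordination.k8s.io Lease; below is a minimal client-go sketch of that approach, with all names (lease name, identity) hypothetical and unrelated to the storage-provisioner's actual code.

	// Hypothetical sketch: Lease-based leader election with client-go, which
	// avoids the "v1 Endpoints is deprecated" warnings seen in the log above.
	// The lease name and identity are illustrative only.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the process runs inside a pod
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "example-hostpath-lock", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner-1"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}
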
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-240359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.404808ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-990820 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-990820 describe deploy/metrics-server -n kube-system: exit status 1 (58.531255ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-990820 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
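Reading the failure above: the MK_ADDON_ENABLE_PAUSED error has nothing to do with the metrics-server manifest. The error chain ("check paused: list paused: runc: sudo runc list -f json") shows that, before enabling the addon, minikube checks whether the cluster is paused by listing runc containers inside the node, and on this crio-runtime node that command exits 1 because /run/runc does not exist (the stderr line above). Since the addon is never deployed, the later kubectl describe returns NotFound and the image check for "fake.domain/registry.k8s.io/echoserver:1.4" (the --registries override prefixed to the --images override) necessarily fails. The sketch below reproduces that kind of paused check; it is illustrative only, not minikube's implementation, and unlike minikube it tolerates the missing runc state dir.

	// Illustrative sketch, not minikube's code: list runc containers and
	// report any in the "paused" state, treating a missing /run/runc state
	// dir (the crio case in the log above) as "nothing is paused".
	package main

	import (
		"encoding/json"
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// runcContainer models the fields of `runc list -f json` used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			var ee *exec.ExitError
			// runc has never created a container on this node, so it fails with
			// "open /run/runc: no such file or directory"; a tolerant check can
			// interpret that as "no containers, hence nothing paused".
			if errors.As(err, &ee) && strings.Contains(string(ee.Stderr), "/run/runc: no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("sudo runc list -f json: %w", err)
		}
		trimmed := strings.TrimSpace(string(out))
		if trimmed == "" || trimmed == "null" {
			return nil, nil
		}
		var cs []runcContainer
		if err := json.Unmarshal([]byte(trimmed), &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := pausedContainers()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Printf("%d paused container(s): %v\n", len(paused), paused)
	}
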
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-990820
helpers_test.go:243: (dbg) docker inspect embed-certs-990820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	        "Created": "2025-12-01T20:07:26.934282918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 336217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:07:26.985720535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hosts",
	        "LogPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808-json.log",
	        "Name": "/embed-certs-990820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-990820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-990820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	                "LowerDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-990820",
	                "Source": "/var/lib/docker/volumes/embed-certs-990820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-990820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-990820",
	                "name.minikube.sigs.k8s.io": "embed-certs-990820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8940687e2155afbaf2577f1ff438ae464068c42e8605e20c3b0a0bd7a8d6e170",
	            "SandboxKey": "/var/run/docker/netns/8940687e2155",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-990820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f73505fd10b0a75826b9bbfa88683343c0777746fa3af258502ff4a892fc61da",
	                    "EndpointID": "8a3e966420367df548983c2c7e6e2187d7ffa028edbaefe7cfba78a0f4a90bda",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "06:70:b7:fd:4b:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-990820",
	                        "30c5f9257afd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
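The inspect output above shows how the kic node container is wired: privileged, tmpfs on /run and /tmp, and the SSH (22), Docker (2376), registry (5000) and API server (8443) ports each published on an ephemeral 127.0.0.1 host port. A small sketch for pulling the API server mapping out of that structure follows; the helper name is hypothetical, and it simply shells out to docker inspect the same way the post-mortem helpers do.

	// Illustrative sketch: read the host port Docker published for 8443/tcp of
	// a kic container, matching the NetworkSettings.Ports block shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func apiServerHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := apiServerHostPort("embed-certs-990820")
		if err != nil {
			fmt.Println(err)
			return
		}
		// With the values captured above this would print 33106.
		fmt.Println("kube-apiserver published at 127.0.0.1:" + port)
	}
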
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25: (1.02932562s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-551864 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo docker system info                                                                                                                                                                                                      │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo containerd config dump                                                                                                                                                                                                  │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                               │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:07:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:07:49.303462  345040 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:07:49.303566  345040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:49.303571  345040 out.go:374] Setting ErrFile to fd 2...
	I1201 20:07:49.303578  345040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:07:49.303823  345040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:07:49.304415  345040 out.go:368] Setting JSON to false
	I1201 20:07:49.305845  345040 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6620,"bootTime":1764613049,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:07:49.305912  345040 start.go:143] virtualization: kvm guest
	I1201 20:07:49.307736  345040 out.go:179] * [old-k8s-version-217464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:07:49.309102  345040 notify.go:221] Checking for updates...
	I1201 20:07:49.309128  345040 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:07:49.313638  345040 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:07:49.315081  345040 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:49.316313  345040 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:07:49.317405  345040 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:07:49.318653  345040 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:07:49.320352  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:49.322337  345040 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1201 20:07:49.323580  345040 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:07:49.352531  345040 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:07:49.352623  345040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:49.428196  345040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-01 20:07:49.413959728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:49.428367  345040 docker.go:319] overlay module found
	I1201 20:07:49.431324  345040 out.go:179] * Using the docker driver based on existing profile
	I1201 20:07:49.432939  345040 start.go:309] selected driver: docker
	I1201 20:07:49.432959  345040 start.go:927] validating driver "docker" against &{Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:49.433199  345040 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:07:49.434320  345040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:07:49.521829  345040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-01 20:07:49.507281493 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:07:49.522190  345040 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:07:49.522223  345040 cni.go:84] Creating CNI manager for ""
	I1201 20:07:49.522309  345040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:49.522359  345040 start.go:353] cluster config:
	{Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:49.524466  345040 out.go:179] * Starting "old-k8s-version-217464" primary control-plane node in "old-k8s-version-217464" cluster
	I1201 20:07:49.527071  345040 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:07:49.528519  345040 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:07:49.529745  345040 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:07:49.529783  345040 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1201 20:07:49.529792  345040 cache.go:65] Caching tarball of preloaded images
	I1201 20:07:49.529885  345040 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:07:49.529895  345040 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1201 20:07:49.530016  345040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/config.json ...
	I1201 20:07:49.530019  345040 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:07:49.555746  345040 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:07:49.555769  345040 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:07:49.555788  345040 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:07:49.555831  345040 start.go:360] acquireMachinesLock for old-k8s-version-217464: {Name:mkc4365980251c10c3c1ecbb8bf9a930e1d6a78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:07:49.555898  345040 start.go:364] duration metric: took 43.667µs to acquireMachinesLock for "old-k8s-version-217464"
	I1201 20:07:49.555919  345040 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:07:49.555928  345040 fix.go:54] fixHost starting: 
	I1201 20:07:49.556191  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:49.578800  345040 fix.go:112] recreateIfNeeded on old-k8s-version-217464: state=Stopped err=<nil>
	W1201 20:07:49.578828  345040 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:07:45.945877  327969 node_ready.go:57] node "no-preload-240359" has "Ready":"False" status (will retry)
	W1201 20:07:48.046722  327969 node_ready.go:57] node "no-preload-240359" has "Ready":"False" status (will retry)
	I1201 20:07:48.465107  327969 node_ready.go:49] node "no-preload-240359" is "Ready"
	I1201 20:07:48.465145  327969 node_ready.go:38] duration metric: took 13.522447122s for node "no-preload-240359" to be "Ready" ...
	I1201 20:07:48.465161  327969 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:07:48.465219  327969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:07:48.482277  327969 api_server.go:72] duration metric: took 13.892147682s to wait for apiserver process to appear ...
	I1201 20:07:48.482332  327969 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:07:48.482355  327969 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:07:48.554505  327969 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 20:07:48.556110  327969 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:07:48.556137  327969 api_server.go:131] duration metric: took 73.797467ms to wait for apiserver health ...
	I1201 20:07:48.556148  327969 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:07:48.562079  327969 system_pods.go:59] 8 kube-system pods found
	I1201 20:07:48.562129  327969 system_pods.go:61] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.562138  327969 system_pods.go:61] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.562147  327969 system_pods.go:61] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.562153  327969 system_pods.go:61] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.562158  327969 system_pods.go:61] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.562163  327969 system_pods.go:61] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.562168  327969 system_pods.go:61] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.562173  327969 system_pods.go:61] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending
	I1201 20:07:48.562180  327969 system_pods.go:74] duration metric: took 6.025957ms to wait for pod list to return data ...
	I1201 20:07:48.562190  327969 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:07:48.579495  327969 default_sa.go:45] found service account: "default"
	I1201 20:07:48.579547  327969 default_sa.go:55] duration metric: took 17.349787ms for default service account to be created ...
	I1201 20:07:48.579569  327969 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:07:48.583567  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:48.583642  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.583650  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.583661  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.583667  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.583672  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.583677  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.583683  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.583737  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:48.583791  327969 retry.go:31] will retry after 219.203786ms: missing components: kube-dns
	I1201 20:07:48.807776  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:48.807812  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:48.807821  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:48.807828  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:48.807833  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:48.807839  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:48.807846  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:48.807855  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:48.807863  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:48.807883  327969 retry.go:31] will retry after 343.152219ms: missing components: kube-dns
	I1201 20:07:49.155137  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:49.155177  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:07:49.155188  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:49.155197  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:49.155203  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:49.155363  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:49.155397  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:49.155404  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:49.155414  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:49.155434  327969 retry.go:31] will retry after 430.693782ms: missing components: kube-dns
	I1201 20:07:49.591988  327969 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:49.592022  327969 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Running
	I1201 20:07:49.592031  327969 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running
	I1201 20:07:49.592036  327969 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running
	I1201 20:07:49.592042  327969 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running
	I1201 20:07:49.592048  327969 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running
	I1201 20:07:49.592053  327969 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running
	I1201 20:07:49.592058  327969 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running
	I1201 20:07:49.592063  327969 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Running
	I1201 20:07:49.592073  327969 system_pods.go:126] duration metric: took 1.012496766s to wait for k8s-apps to be running ...
	I1201 20:07:49.592083  327969 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:07:49.592140  327969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:07:49.615349  327969 system_svc.go:56] duration metric: took 23.256986ms WaitForService to wait for kubelet
	I1201 20:07:49.615385  327969 kubeadm.go:587] duration metric: took 15.025259573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:07:49.615407  327969 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:07:49.623831  327969 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:07:49.623867  327969 node_conditions.go:123] node cpu capacity is 8
	I1201 20:07:49.623888  327969 node_conditions.go:105] duration metric: took 8.476008ms to run NodePressure ...
	I1201 20:07:49.623904  327969 start.go:242] waiting for startup goroutines ...
	I1201 20:07:49.623913  327969 start.go:247] waiting for cluster config update ...
	I1201 20:07:49.623925  327969 start.go:256] writing updated cluster config ...
	I1201 20:07:49.624226  327969 ssh_runner.go:195] Run: rm -f paused
	I1201 20:07:49.631257  327969 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:07:49.637794  327969 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.644835  327969 pod_ready.go:94] pod "coredns-7d764666f9-6kzhv" is "Ready"
	I1201 20:07:49.644864  327969 pod_ready.go:86] duration metric: took 7.040491ms for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.647550  327969 pod_ready.go:83] waiting for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.652325  327969 pod_ready.go:94] pod "etcd-no-preload-240359" is "Ready"
	I1201 20:07:49.652350  327969 pod_ready.go:86] duration metric: took 4.773464ms for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.654722  327969 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.659337  327969 pod_ready.go:94] pod "kube-apiserver-no-preload-240359" is "Ready"
	I1201 20:07:49.659359  327969 pod_ready.go:86] duration metric: took 4.616227ms for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:49.661688  327969 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.036111  327969 pod_ready.go:94] pod "kube-controller-manager-no-preload-240359" is "Ready"
	I1201 20:07:50.036143  327969 pod_ready.go:86] duration metric: took 374.426482ms for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.236198  327969 pod_ready.go:83] waiting for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.636159  327969 pod_ready.go:94] pod "kube-proxy-zbbsb" is "Ready"
	I1201 20:07:50.636185  327969 pod_ready.go:86] duration metric: took 399.964847ms for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:50.836213  327969 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:51.235530  327969 pod_ready.go:94] pod "kube-scheduler-no-preload-240359" is "Ready"
	I1201 20:07:51.235556  327969 pod_ready.go:86] duration metric: took 399.321978ms for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:07:51.235568  327969 pod_ready.go:40] duration metric: took 1.604263583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:07:51.278178  327969 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:07:51.280211  327969 out.go:179] * Done! kubectl is now configured to use "no-preload-240359" cluster and "default" namespace by default
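The wait loop above polls the kube-system pods with a growing backoff (219ms, 343ms, 430ms) until kube-dns joins the already-Running control-plane components, then makes one final per-pod "Ready" pass. As a rough manual cross-check (a sketch, not minikube's own mechanism), and assuming the kubeconfig context is named after the profile as the last log line reports, the same condition can be verified with kubectl's built-in wait:

    # Poll each labelled kube-system component until its pod reports Ready.
    for label in k8s-app=kube-dns component=etcd component=kube-apiserver \
                 component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-240359 -n kube-system \
        wait --for=condition=Ready pod -l "$label" --timeout=240s
    done

kubectl wait exits non-zero if any matching pod misses the timeout, which mirrors the 4m0s "extra waiting" budget logged above.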
	I1201 20:07:49.038105  335220 addons.go:530] duration metric: took 774.908267ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1201 20:07:49.261733  335220 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-990820" context rescaled to 1 replicas
	W1201 20:07:50.827430  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	I1201 20:07:49.330223  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Running}}
	I1201 20:07:49.352963  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.377396  343871 cli_runner.go:164] Run: docker exec default-k8s-diff-port-009682 stat /var/lib/dpkg/alternatives/iptables
	I1201 20:07:49.445148  343871 oci.go:144] the created container "default-k8s-diff-port-009682" has a running status.
	I1201 20:07:49.445186  343871 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa...
	I1201 20:07:49.482681  343871 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1201 20:07:49.519173  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.544694  343871 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1201 20:07:49.544712  343871 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-009682 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1201 20:07:49.592938  343871 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:07:49.620059  343871 machine.go:94] provisionDockerMachine start ...
	I1201 20:07:49.620169  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:49.647691  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:49.648016  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:49.648033  343871 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:07:49.648789  343871 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41656->127.0.0.1:33108: read: connection reset by peer
	I1201 20:07:52.790039  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:07:52.790067  343871 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-009682"
	I1201 20:07:52.790146  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:52.808047  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:52.808332  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:52.808353  343871 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-009682 && echo "default-k8s-diff-port-009682" | sudo tee /etc/hostname
	I1201 20:07:52.964935  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:07:52.965034  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:52.986454  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:52.986724  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:52.986747  343871 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-009682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-009682/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-009682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:07:53.129794  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:07:53.129817  343871 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:07:53.129858  343871 ubuntu.go:190] setting up certificates
	I1201 20:07:53.129869  343871 provision.go:84] configureAuth start
	I1201 20:07:53.129928  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:53.148793  343871 provision.go:143] copyHostCerts
	I1201 20:07:53.148863  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:07:53.148877  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:07:53.148968  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:07:53.149080  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:07:53.149088  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:07:53.149119  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:07:53.149175  343871 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:07:53.149183  343871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:07:53.149206  343871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:07:53.149254  343871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-009682 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-009682 localhost minikube]
	I1201 20:07:53.257229  343871 provision.go:177] copyRemoteCerts
	I1201 20:07:53.257299  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:07:53.257351  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.278599  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:53.380552  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:07:53.399580  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:07:53.416478  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:07:53.436758  343871 provision.go:87] duration metric: took 306.875067ms to configureAuth
	I1201 20:07:53.436788  343871 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:07:53.436997  343871 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:07:53.437136  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.457678  343871 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.457982  343871 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1201 20:07:53.458008  343871 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:07:53.741692  343871 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:07:53.741716  343871 machine.go:97] duration metric: took 4.121631094s to provisionDockerMachine
	I1201 20:07:53.741729  343871 client.go:176] duration metric: took 9.423790827s to LocalClient.Create
	I1201 20:07:53.741751  343871 start.go:167] duration metric: took 9.423858779s to libmachine.API.Create "default-k8s-diff-port-009682"
	I1201 20:07:53.741763  343871 start.go:293] postStartSetup for "default-k8s-diff-port-009682" (driver="docker")
	I1201 20:07:53.741779  343871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:07:53.741851  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:07:53.741885  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.760146  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:53.861604  343871 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:07:53.865388  343871 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:07:53.865421  343871 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:07:53.865434  343871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:07:53.865500  343871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:07:53.865602  343871 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:07:53.865745  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:07:53.873537  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:53.895139  343871 start.go:296] duration metric: took 153.35865ms for postStartSetup
	I1201 20:07:53.895534  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:53.914227  343871 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:07:53.914495  343871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:07:53.914544  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:53.932721  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:54.030621  343871 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:07:54.036397  343871 start.go:128] duration metric: took 9.720648247s to createHost
	I1201 20:07:54.036423  343871 start.go:83] releasing machines lock for "default-k8s-diff-port-009682", held for 9.720781142s
	I1201 20:07:54.036484  343871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:07:54.056643  343871 ssh_runner.go:195] Run: cat /version.json
	I1201 20:07:54.056691  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:54.056770  343871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:07:54.056849  343871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:07:54.077058  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:54.077850  343871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:07:49.580558  345040 out.go:252] * Restarting existing docker container for "old-k8s-version-217464" ...
	I1201 20:07:49.580634  345040 cli_runner.go:164] Run: docker start old-k8s-version-217464
	I1201 20:07:49.878450  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:49.898541  345040 kic.go:430] container "old-k8s-version-217464" state is running.
	I1201 20:07:49.898970  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:49.920515  345040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/config.json ...
	I1201 20:07:49.920741  345040 machine.go:94] provisionDockerMachine start ...
	I1201 20:07:49.920796  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:49.942851  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:49.943076  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:49.943089  345040 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:07:49.943770  345040 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41098->127.0.0.1:33113: read: connection reset by peer
	I1201 20:07:53.092863  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-217464
	
	I1201 20:07:53.092889  345040 ubuntu.go:182] provisioning hostname "old-k8s-version-217464"
	I1201 20:07:53.092934  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.110764  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.110972  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.110984  345040 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-217464 && echo "old-k8s-version-217464" | sudo tee /etc/hostname
	I1201 20:07:53.260609  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-217464
	
	I1201 20:07:53.260683  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.279956  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.280176  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.280191  345040 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-217464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-217464/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-217464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:07:53.419880  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:07:53.419909  345040 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:07:53.419960  345040 ubuntu.go:190] setting up certificates
	I1201 20:07:53.419987  345040 provision.go:84] configureAuth start
	I1201 20:07:53.420045  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:53.440584  345040 provision.go:143] copyHostCerts
	I1201 20:07:53.440638  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:07:53.440646  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:07:53.440708  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:07:53.440823  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:07:53.440834  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:07:53.440894  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:07:53.441039  345040 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:07:53.441052  345040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:07:53.441097  345040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:07:53.441174  345040 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-217464 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-217464]
	I1201 20:07:53.518882  345040 provision.go:177] copyRemoteCerts
	I1201 20:07:53.518940  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:07:53.518971  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.539126  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:53.640027  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:07:53.660835  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1201 20:07:53.678858  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1201 20:07:53.696471  345040 provision.go:87] duration metric: took 276.468842ms to configureAuth
	I1201 20:07:53.696493  345040 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:07:53.696665  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:53.696777  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:53.716101  345040 main.go:143] libmachine: Using SSH client type: native
	I1201 20:07:53.716339  345040 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1201 20:07:53.716357  345040 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:07:54.035977  345040 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:07:54.036009  345040 machine.go:97] duration metric: took 4.115255697s to provisionDockerMachine
	I1201 20:07:54.036023  345040 start.go:293] postStartSetup for "old-k8s-version-217464" (driver="docker")
	I1201 20:07:54.036036  345040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:07:54.036108  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:07:54.036163  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.056907  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.159946  345040 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:07:54.163424  345040 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:07:54.163465  345040 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:07:54.163480  345040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:07:54.163532  345040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:07:54.163636  345040 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:07:54.163768  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:07:54.171506  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:54.189265  345040 start.go:296] duration metric: took 153.227057ms for postStartSetup
	I1201 20:07:54.189372  345040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:07:54.189413  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.209404  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.232560  343871 ssh_runner.go:195] Run: systemctl --version
	I1201 20:07:54.239369  343871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:07:54.275426  343871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:07:54.280387  343871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:07:54.280439  343871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:07:54.306355  343871 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 20:07:54.306374  343871 start.go:496] detecting cgroup driver to use...
	I1201 20:07:54.306407  343871 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:07:54.306454  343871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:07:54.322496  343871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:07:54.337536  343871 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:07:54.337595  343871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:07:54.357441  343871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:07:54.375811  343871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:07:54.472153  343871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:07:54.570800  343871 docker.go:234] disabling docker service ...
	I1201 20:07:54.570860  343871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:07:54.591870  343871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:07:54.604849  343871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:07:54.699578  343871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:07:54.797563  343871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:07:54.810655  343871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:07:54.825158  343871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:07:54.825218  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.835785  343871 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:07:54.835842  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.844798  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.856705  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.866860  343871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:07:54.875211  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.884251  343871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.897922  343871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:54.907503  343871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:07:54.915549  343871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:07:54.923004  343871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.003651  343871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:07:55.161627  343871 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:07:55.161691  343871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:07:55.165765  343871 start.go:564] Will wait 60s for crictl version
	I1201 20:07:55.165818  343871 ssh_runner.go:195] Run: which crictl
	I1201 20:07:55.169554  343871 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:07:55.196315  343871 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:07:55.196405  343871 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.225245  343871 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.254328  343871 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
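The sequence at 20:07:54 above rewrites CRI-O's drop-in config over SSH and then restarts the runtime. Condensed into a standalone script (a sketch of the same steps run directly on the node, rather than through minikube's ssh_runner), it amounts to:

    #!/bin/bash
    set -euo pipefail
    conf=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and switch CRI-O to the systemd cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    # Allow unprivileged low ports inside pods via default_sysctls.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
    sudo grep -q '^ *default_sysctls' "$conf" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

    # Enable IPv4 forwarding and restart the runtime.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio

The only profile-specific value is the pause image tag, which minikube picks per Kubernetes version: 3.10.1 for the v1.34.2 profile here versus 3.9 for the v1.28.0 profile below.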
	I1201 20:07:54.306181  345040 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:07:54.311115  345040 fix.go:56] duration metric: took 4.755182827s for fixHost
	I1201 20:07:54.311139  345040 start.go:83] releasing machines lock for "old-k8s-version-217464", held for 4.75522957s
	I1201 20:07:54.311188  345040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-217464
	I1201 20:07:54.331137  345040 ssh_runner.go:195] Run: cat /version.json
	I1201 20:07:54.331206  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.331225  345040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:07:54.331323  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:54.351091  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.352165  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:54.448457  345040 ssh_runner.go:195] Run: systemctl --version
	I1201 20:07:54.504933  345040 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:07:54.549146  345040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:07:54.554144  345040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:07:54.554199  345040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:07:54.562889  345040 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:07:54.562912  345040 start.go:496] detecting cgroup driver to use...
	I1201 20:07:54.562937  345040 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:07:54.562969  345040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:07:54.578133  345040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:07:54.591629  345040 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:07:54.591698  345040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:07:54.607915  345040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:07:54.620877  345040 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:07:54.709963  345040 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:07:54.795738  345040 docker.go:234] disabling docker service ...
	I1201 20:07:54.795804  345040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:07:54.811814  345040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:07:54.824350  345040 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:07:54.910089  345040 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:07:55.000161  345040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:07:55.013327  345040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:07:55.028735  345040 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1201 20:07:55.028792  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.037931  345040 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:07:55.037983  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.046682  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.056313  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.065476  345040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:07:55.074420  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.083356  345040 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.092454  345040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:07:55.103551  345040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:07:55.111538  345040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:07:55.118797  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.206454  345040 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:07:55.355748  345040 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:07:55.355826  345040 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:07:55.360030  345040 start.go:564] Will wait 60s for crictl version
	I1201 20:07:55.360087  345040 ssh_runner.go:195] Run: which crictl
	I1201 20:07:55.363806  345040 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:07:55.389953  345040 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:07:55.390053  345040 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.420750  345040 ssh_runner.go:195] Run: crio --version
	I1201 20:07:55.454432  345040 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	W1201 20:07:52.828180  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	W1201 20:07:55.327898  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	I1201 20:07:55.455704  345040 cli_runner.go:164] Run: docker network inspect old-k8s-version-217464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:07:55.475323  345040 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:07:55.479564  345040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.489945  345040 kubeadm.go:884] updating cluster {Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:07:55.490076  345040 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 20:07:55.490147  345040 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.530140  345040 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.530166  345040 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:07:55.530218  345040 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.557241  345040 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.557268  345040 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:07:55.557283  345040 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1201 20:07:55.557468  345040 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-217464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:07:55.557563  345040 ssh_runner.go:195] Run: crio config
	I1201 20:07:55.605791  345040 cni.go:84] Creating CNI manager for ""
	I1201 20:07:55.605818  345040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:55.605835  345040 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:07:55.605859  345040 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-217464 NodeName:old-k8s-version-217464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:07:55.606080  345040 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-217464"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:07:55.606163  345040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1201 20:07:55.614793  345040 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:07:55.614853  345040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:07:55.623764  345040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1201 20:07:55.636737  345040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:07:55.649368  345040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1201 20:07:55.662550  345040 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:07:55.666352  345040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.676964  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.762132  345040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:55.792780  345040 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464 for IP: 192.168.76.2
	I1201 20:07:55.792802  345040 certs.go:195] generating shared ca certs ...
	I1201 20:07:55.792822  345040 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.792977  345040 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:07:55.793032  345040 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:07:55.793044  345040 certs.go:257] generating profile certs ...
	I1201 20:07:55.793166  345040 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/client.key
	I1201 20:07:55.793248  345040 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.key.6b4b6768
	I1201 20:07:55.793332  345040 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.key
	I1201 20:07:55.793478  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:07:55.793523  345040 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:07:55.793535  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:07:55.793571  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:07:55.793605  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:07:55.793636  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:07:55.793699  345040 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:55.794463  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:07:55.813962  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:07:55.833693  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:07:55.853214  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:07:55.874583  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1201 20:07:55.897128  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:07:55.915812  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:07:55.934097  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/old-k8s-version-217464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:07:55.951563  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:07:55.968980  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:07:55.986866  345040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:07:56.006254  345040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:07:56.019243  345040 ssh_runner.go:195] Run: openssl version
	I1201 20:07:56.025496  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:07:56.034731  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.038605  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.038665  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.074232  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:07:56.083209  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:07:56.091774  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.095818  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.095874  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.131547  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:07:56.139470  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:07:56.148347  345040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.152876  345040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.152927  345040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189474  345040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
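The b5213941.0, 51391683.0 and 3ec20f2e.0 link names created above are OpenSSL subject-hash names: c_rehash-style trust stores look a CA up by the hash that openssl x509 -hash prints, so each certificate copied into /usr/share/ca-certificates gets a matching <hash>.0 symlink in /etc/ssl/certs. A short sketch of recomputing such a link by hand (assumes a shell on the node; not part of the test output):

    # print the subject hash that OpenSSL uses as the lookup name
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # recreate the trust-store link that the log above creates as b5213941.0
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"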
	I1201 20:07:56.198393  345040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:07:56.202348  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:07:56.237949  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:07:56.273429  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:07:56.318973  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:07:56.370231  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:07:56.422011  345040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:07:56.488771  345040 kubeadm.go:401] StartCluster: {Name:old-k8s-version-217464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-217464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:56.488879  345040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:07:56.488937  345040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:07:56.527703  345040 cri.go:89] found id: "9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421"
	I1201 20:07:56.527722  345040 cri.go:89] found id: "50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591"
	I1201 20:07:56.527726  345040 cri.go:89] found id: "4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f"
	I1201 20:07:56.527731  345040 cri.go:89] found id: "604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428"
	I1201 20:07:56.527747  345040 cri.go:89] found id: ""
	I1201 20:07:56.527782  345040 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:07:56.540424  345040 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:07:56Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:07:56.540499  345040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:07:56.551101  345040 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:07:56.551121  345040 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:07:56.551212  345040 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:07:56.559726  345040 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:07:56.560571  345040 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-217464" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:56.561282  345040 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-217464" cluster setting kubeconfig missing "old-k8s-version-217464" context setting]
	I1201 20:07:56.562609  345040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.564740  345040 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:07:56.573859  345040 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:07:56.573888  345040 kubeadm.go:602] duration metric: took 22.751807ms to restartPrimaryControlPlane
	I1201 20:07:56.573897  345040 kubeadm.go:403] duration metric: took 85.13563ms to StartCluster
	I1201 20:07:56.573914  345040 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.573974  345040 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:07:56.576130  345040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:56.576672  345040 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:07:56.576707  345040 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:07:56.577037  345040 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:07:56.577146  345040 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-217464"
	I1201 20:07:56.577165  345040 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-217464"
	I1201 20:07:56.577163  345040 addons.go:70] Setting dashboard=true in profile "old-k8s-version-217464"
	W1201 20:07:56.577173  345040 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:07:56.577187  345040 addons.go:239] Setting addon dashboard=true in "old-k8s-version-217464"
	W1201 20:07:56.577197  345040 addons.go:248] addon dashboard should already be in state true
	I1201 20:07:56.577204  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.577231  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.577189  345040 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-217464"
	I1201 20:07:56.577334  345040 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-217464"
	I1201 20:07:56.577729  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.577758  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.578107  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.581708  345040 out.go:179] * Verifying Kubernetes components...
	I1201 20:07:56.583053  345040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:56.605382  345040 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:07:56.605790  345040 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-217464"
	W1201 20:07:56.605813  345040 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:07:56.605841  345040 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:07:56.606387  345040 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:07:56.606610  345040 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:07:56.606628  345040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:07:56.606673  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.611193  345040 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:07:56.612471  345040 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:07:55.255649  343871 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-009682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:07:55.275029  343871 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:07:55.279251  343871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.290100  343871 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:07:55.290215  343871 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:07:55.290262  343871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.324234  343871 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.324253  343871 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:07:55.324309  343871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:07:55.349974  343871 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:07:55.349997  343871 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:07:55.350006  343871 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1201 20:07:55.350178  343871 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-009682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
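The doubled ExecStart in the kubelet drop-in above is the standard systemd override pattern: the first, empty ExecStart= clears the command line inherited from kubelet.service, and the second line defines the kubelet invocation minikube wants for this node. A quick way to inspect the merged unit on the node (a sketch, not part of the test output):

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in, in merge order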
	I1201 20:07:55.350243  343871 ssh_runner.go:195] Run: crio config
	I1201 20:07:55.398778  343871 cni.go:84] Creating CNI manager for ""
	I1201 20:07:55.398800  343871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:07:55.398818  343871 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:07:55.398883  343871 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-009682 NodeName:default-k8s-diff-port-009682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:07:55.399004  343871 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-009682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:07:55.399079  343871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:07:55.407875  343871 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:07:55.407961  343871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:07:55.416610  343871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1201 20:07:55.431116  343871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:07:55.448684  343871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1201 20:07:55.463113  343871 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:07:55.467424  343871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:07:55.478713  343871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:07:55.562581  343871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:55.589935  343871 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682 for IP: 192.168.103.2
	I1201 20:07:55.589951  343871 certs.go:195] generating shared ca certs ...
	I1201 20:07:55.589966  343871 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.590133  343871 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:07:55.590184  343871 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:07:55.590196  343871 certs.go:257] generating profile certs ...
	I1201 20:07:55.590261  343871 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key
	I1201 20:07:55.590281  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt with IP's: []
	I1201 20:07:55.720354  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt ...
	I1201 20:07:55.720381  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.crt: {Name:mk3534163d936160446daade155159815f0a82ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.720544  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key ...
	I1201 20:07:55.720559  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key: {Name:mkf9595ac97f87c1b0c1306a7e2c55a45fcf6771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.720642  343871 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564
	I1201 20:07:55.720662  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1201 20:07:55.814484  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 ...
	I1201 20:07:55.814559  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564: {Name:mke28397abe478ff9401c23a10947ad67439f4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.814751  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564 ...
	I1201 20:07:55.814775  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564: {Name:mk6154355226d3266d30366a517b2f4bc80bc0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.814891  343871 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt.6e926564 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt
	I1201 20:07:55.814963  343871 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key
	I1201 20:07:55.815015  343871 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key
	I1201 20:07:55.815029  343871 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt with IP's: []
	I1201 20:07:55.896420  343871 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt ...
	I1201 20:07:55.896447  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt: {Name:mk76d91b7411e61bd0d00e522f7e37f278f501bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.896614  343871 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key ...
	I1201 20:07:55.896637  343871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key: {Name:mkbe4b2c3a65cb8729d8862f90c087d0dbb635d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:07:55.896880  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:07:55.896940  343871 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:07:55.896956  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:07:55.897003  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:07:55.897039  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:07:55.897075  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:07:55.897150  343871 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:07:55.897864  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:07:55.916265  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:07:55.934643  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:07:55.951972  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:07:55.969101  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:07:55.987093  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:07:56.006501  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:07:56.024552  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:07:56.042507  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:07:56.061838  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:07:56.080447  343871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:07:56.098873  343871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:07:56.111570  343871 ssh_runner.go:195] Run: openssl version
	I1201 20:07:56.117710  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:07:56.126280  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.130032  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.130117  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:07:56.168091  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:07:56.176983  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:07:56.185768  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189782  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.189834  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:07:56.225687  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:07:56.235117  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:07:56.243874  343871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.247766  343871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.247821  343871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:07:56.283510  343871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:07:56.292619  343871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:07:56.296570  343871 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:07:56.296635  343871 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:07:56.296735  343871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:07:56.296787  343871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:07:56.337314  343871 cri.go:89] found id: ""
	I1201 20:07:56.337391  343871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:07:56.346840  343871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:07:56.355855  343871 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:07:56.355921  343871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:07:56.365679  343871 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:07:56.365699  343871 kubeadm.go:158] found existing configuration files:
	
	I1201 20:07:56.365746  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1201 20:07:56.374266  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:07:56.374348  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:07:56.385029  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1201 20:07:56.395639  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:07:56.395691  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:07:56.405820  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1201 20:07:56.415916  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:07:56.416060  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:07:56.427393  343871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1201 20:07:56.438149  343871 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:07:56.438211  343871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:07:56.449334  343871 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:07:56.513858  343871 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:07:56.513956  343871 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:07:56.544320  343871 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:07:56.544415  343871 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:07:56.544469  343871 kubeadm.go:319] OS: Linux
	I1201 20:07:56.544634  343871 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:07:56.544726  343871 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:07:56.544806  343871 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:07:56.544883  343871 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:07:56.544959  343871 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:07:56.545026  343871 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:07:56.545110  343871 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:07:56.545174  343871 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:07:56.642013  343871 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:07:56.642156  343871 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:07:56.642299  343871 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:07:56.653887  343871 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:07:56.655639  343871 out.go:252]   - Generating certificates and keys ...
	I1201 20:07:56.658055  343871 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:07:56.658360  343871 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:07:57.069239  343871 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:07:57.227835  343871 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:07:57.306854  343871 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:07:57.663805  343871 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:07:58.608348  343871 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:07:58.608583  343871 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-009682 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1201 20:07:58.769470  343871 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:07:58.769771  343871 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-009682 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1201 20:07:56.613662  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:07:56.613718  345040 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:07:56.613816  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.636129  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.651433  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.652045  345040 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:07:56.652069  345040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:07:56.652122  345040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:07:56.679120  345040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:07:56.752013  345040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:07:56.769121  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:07:56.771814  345040 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-217464" to be "Ready" ...
	I1201 20:07:56.776135  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:07:56.776154  345040 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:07:56.796044  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:07:56.796071  345040 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:07:56.800968  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:07:56.813467  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:07:56.813488  345040 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:07:56.831426  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:07:56.831448  345040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:07:56.850338  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:07:56.850363  345040 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:07:56.866863  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:07:56.866901  345040 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:07:56.883338  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:07:56.883365  345040 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:07:56.900964  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:07:56.900988  345040 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:07:56.915120  345040 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:07:56.915144  345040 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:07:56.931193  345040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:07:59.300079  345040 node_ready.go:49] node "old-k8s-version-217464" is "Ready"
	I1201 20:07:59.300120  345040 node_ready.go:38] duration metric: took 2.528276304s for node "old-k8s-version-217464" to be "Ready" ...
	I1201 20:07:59.300135  345040 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:07:59.300183  345040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:07:59.969570  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.200410076s)
	I1201 20:07:59.969686  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.168690088s)
	I1201 20:08:00.413735  345040 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.113525969s)
	I1201 20:08:00.413778  345040 api_server.go:72] duration metric: took 3.837072549s to wait for apiserver process to appear ...
	I1201 20:08:00.413787  345040 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:08:00.413810  345040 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:08:00.414360  345040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.483125858s)
	I1201 20:08:00.415891  345040 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-217464 addons enable metrics-server
	
	I1201 20:08:00.417121  345040 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1201 20:07:57.328049  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	W1201 20:07:59.336673  335220 node_ready.go:57] node "embed-certs-990820" has "Ready":"False" status (will retry)
	I1201 20:07:59.827524  335220 node_ready.go:49] node "embed-certs-990820" is "Ready"
	I1201 20:07:59.827557  335220 node_ready.go:38] duration metric: took 11.002957123s for node "embed-certs-990820" to be "Ready" ...
	I1201 20:07:59.827573  335220 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:07:59.827626  335220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:07:59.842674  335220 api_server.go:72] duration metric: took 11.579465349s to wait for apiserver process to appear ...
	I1201 20:07:59.842722  335220 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:07:59.842743  335220 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1201 20:07:59.848160  335220 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1201 20:07:59.849215  335220 api_server.go:141] control plane version: v1.34.2
	I1201 20:07:59.849242  335220 api_server.go:131] duration metric: took 6.512507ms to wait for apiserver health ...
	I1201 20:07:59.849253  335220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:07:59.858555  335220 system_pods.go:59] 8 kube-system pods found
	I1201 20:07:59.858597  335220 system_pods.go:61] "coredns-66bc5c9577-qngk9" [82add6c5-30ec-4a10-8643-ebfd6e4446b2] Pending
	I1201 20:07:59.858612  335220 system_pods.go:61] "etcd-embed-certs-990820" [fdd30505-a46a-4ea9-885e-048a100a8b94] Running
	I1201 20:07:59.858618  335220 system_pods.go:61] "kindnet-cpmn4" [75697d60-89d9-474b-97e4-98e1de47830d] Running
	I1201 20:07:59.858624  335220 system_pods.go:61] "kube-apiserver-embed-certs-990820" [7b948468-e47f-440a-9aab-e16661e244bf] Running
	I1201 20:07:59.858630  335220 system_pods.go:61] "kube-controller-manager-embed-certs-990820" [7bf7e15b-63ae-453f-b0a4-e585067fc780] Running
	I1201 20:07:59.858635  335220 system_pods.go:61] "kube-proxy-t2nmz" [1c4f1726-e033-43cf-8bd5-4f09a8761f82] Running
	I1201 20:07:59.858640  335220 system_pods.go:61] "kube-scheduler-embed-certs-990820" [d72d2668-61c2-4d85-b0d3-bfa5c23b10ef] Running
	I1201 20:07:59.858653  335220 system_pods.go:61] "storage-provisioner" [979d22fd-6b50-45b1-9ef0-6e1a932db5c2] Pending
	I1201 20:07:59.858661  335220 system_pods.go:74] duration metric: took 9.400687ms to wait for pod list to return data ...
	I1201 20:07:59.858670  335220 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:07:59.861248  335220 default_sa.go:45] found service account: "default"
	I1201 20:07:59.861268  335220 default_sa.go:55] duration metric: took 2.591194ms for default service account to be created ...
	I1201 20:07:59.861299  335220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:07:59.864087  335220 system_pods.go:86] 8 kube-system pods found
	I1201 20:07:59.864117  335220 system_pods.go:89] "coredns-66bc5c9577-qngk9" [82add6c5-30ec-4a10-8643-ebfd6e4446b2] Pending
	I1201 20:07:59.864123  335220 system_pods.go:89] "etcd-embed-certs-990820" [fdd30505-a46a-4ea9-885e-048a100a8b94] Running
	I1201 20:07:59.864128  335220 system_pods.go:89] "kindnet-cpmn4" [75697d60-89d9-474b-97e4-98e1de47830d] Running
	I1201 20:07:59.864134  335220 system_pods.go:89] "kube-apiserver-embed-certs-990820" [7b948468-e47f-440a-9aab-e16661e244bf] Running
	I1201 20:07:59.864139  335220 system_pods.go:89] "kube-controller-manager-embed-certs-990820" [7bf7e15b-63ae-453f-b0a4-e585067fc780] Running
	I1201 20:07:59.864144  335220 system_pods.go:89] "kube-proxy-t2nmz" [1c4f1726-e033-43cf-8bd5-4f09a8761f82] Running
	I1201 20:07:59.864149  335220 system_pods.go:89] "kube-scheduler-embed-certs-990820" [d72d2668-61c2-4d85-b0d3-bfa5c23b10ef] Running
	I1201 20:07:59.864158  335220 system_pods.go:89] "storage-provisioner" [979d22fd-6b50-45b1-9ef0-6e1a932db5c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:07:59.864181  335220 retry.go:31] will retry after 266.999269ms: missing components: kube-dns
	I1201 20:08:00.136986  335220 system_pods.go:86] 8 kube-system pods found
	I1201 20:08:00.137025  335220 system_pods.go:89] "coredns-66bc5c9577-qngk9" [82add6c5-30ec-4a10-8643-ebfd6e4446b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:00.137164  335220 system_pods.go:89] "etcd-embed-certs-990820" [fdd30505-a46a-4ea9-885e-048a100a8b94] Running
	I1201 20:08:00.137192  335220 system_pods.go:89] "kindnet-cpmn4" [75697d60-89d9-474b-97e4-98e1de47830d] Running
	I1201 20:08:00.137198  335220 system_pods.go:89] "kube-apiserver-embed-certs-990820" [7b948468-e47f-440a-9aab-e16661e244bf] Running
	I1201 20:08:00.137204  335220 system_pods.go:89] "kube-controller-manager-embed-certs-990820" [7bf7e15b-63ae-453f-b0a4-e585067fc780] Running
	I1201 20:08:00.137209  335220 system_pods.go:89] "kube-proxy-t2nmz" [1c4f1726-e033-43cf-8bd5-4f09a8761f82] Running
	I1201 20:08:00.137214  335220 system_pods.go:89] "kube-scheduler-embed-certs-990820" [d72d2668-61c2-4d85-b0d3-bfa5c23b10ef] Running
	I1201 20:08:00.137257  335220 system_pods.go:89] "storage-provisioner" [979d22fd-6b50-45b1-9ef0-6e1a932db5c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:00.137317  335220 retry.go:31] will retry after 247.893747ms: missing components: kube-dns
	I1201 20:08:00.391187  335220 system_pods.go:86] 8 kube-system pods found
	I1201 20:08:00.391228  335220 system_pods.go:89] "coredns-66bc5c9577-qngk9" [82add6c5-30ec-4a10-8643-ebfd6e4446b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:00.391237  335220 system_pods.go:89] "etcd-embed-certs-990820" [fdd30505-a46a-4ea9-885e-048a100a8b94] Running
	I1201 20:08:00.391245  335220 system_pods.go:89] "kindnet-cpmn4" [75697d60-89d9-474b-97e4-98e1de47830d] Running
	I1201 20:08:00.391250  335220 system_pods.go:89] "kube-apiserver-embed-certs-990820" [7b948468-e47f-440a-9aab-e16661e244bf] Running
	I1201 20:08:00.391256  335220 system_pods.go:89] "kube-controller-manager-embed-certs-990820" [7bf7e15b-63ae-453f-b0a4-e585067fc780] Running
	I1201 20:08:00.391261  335220 system_pods.go:89] "kube-proxy-t2nmz" [1c4f1726-e033-43cf-8bd5-4f09a8761f82] Running
	I1201 20:08:00.391266  335220 system_pods.go:89] "kube-scheduler-embed-certs-990820" [d72d2668-61c2-4d85-b0d3-bfa5c23b10ef] Running
	I1201 20:08:00.391281  335220 system_pods.go:89] "storage-provisioner" [979d22fd-6b50-45b1-9ef0-6e1a932db5c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:00.391330  335220 retry.go:31] will retry after 301.055672ms: missing components: kube-dns
	I1201 20:08:00.696796  335220 system_pods.go:86] 8 kube-system pods found
	I1201 20:08:00.696829  335220 system_pods.go:89] "coredns-66bc5c9577-qngk9" [82add6c5-30ec-4a10-8643-ebfd6e4446b2] Running
	I1201 20:08:00.696839  335220 system_pods.go:89] "etcd-embed-certs-990820" [fdd30505-a46a-4ea9-885e-048a100a8b94] Running
	I1201 20:08:00.696844  335220 system_pods.go:89] "kindnet-cpmn4" [75697d60-89d9-474b-97e4-98e1de47830d] Running
	I1201 20:08:00.696848  335220 system_pods.go:89] "kube-apiserver-embed-certs-990820" [7b948468-e47f-440a-9aab-e16661e244bf] Running
	I1201 20:08:00.696852  335220 system_pods.go:89] "kube-controller-manager-embed-certs-990820" [7bf7e15b-63ae-453f-b0a4-e585067fc780] Running
	I1201 20:08:00.696860  335220 system_pods.go:89] "kube-proxy-t2nmz" [1c4f1726-e033-43cf-8bd5-4f09a8761f82] Running
	I1201 20:08:00.696865  335220 system_pods.go:89] "kube-scheduler-embed-certs-990820" [d72d2668-61c2-4d85-b0d3-bfa5c23b10ef] Running
	I1201 20:08:00.696877  335220 system_pods.go:89] "storage-provisioner" [979d22fd-6b50-45b1-9ef0-6e1a932db5c2] Running
	I1201 20:08:00.696891  335220 system_pods.go:126] duration metric: took 835.584769ms to wait for k8s-apps to be running ...
	I1201 20:08:00.696904  335220 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:08:00.696963  335220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:00.713856  335220 system_svc.go:56] duration metric: took 16.944638ms WaitForService to wait for kubelet
	I1201 20:08:00.713886  335220 kubeadm.go:587] duration metric: took 12.450683505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:00.713903  335220 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:08:00.717265  335220 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:08:00.717302  335220 node_conditions.go:123] node cpu capacity is 8
	I1201 20:08:00.717331  335220 node_conditions.go:105] duration metric: took 3.422604ms to run NodePressure ...
	I1201 20:08:00.717349  335220 start.go:242] waiting for startup goroutines ...
	I1201 20:08:00.717364  335220 start.go:247] waiting for cluster config update ...
	I1201 20:08:00.717384  335220 start.go:256] writing updated cluster config ...
	I1201 20:08:00.717686  335220 ssh_runner.go:195] Run: rm -f paused
	I1201 20:08:00.722388  335220 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:08:00.726852  335220 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qngk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.731590  335220 pod_ready.go:94] pod "coredns-66bc5c9577-qngk9" is "Ready"
	I1201 20:08:00.731618  335220 pod_ready.go:86] duration metric: took 4.740977ms for pod "coredns-66bc5c9577-qngk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.733949  335220 pod_ready.go:83] waiting for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.737975  335220 pod_ready.go:94] pod "etcd-embed-certs-990820" is "Ready"
	I1201 20:08:00.737994  335220 pod_ready.go:86] duration metric: took 4.028456ms for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.740093  335220 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.744125  335220 pod_ready.go:94] pod "kube-apiserver-embed-certs-990820" is "Ready"
	I1201 20:08:00.744146  335220 pod_ready.go:86] duration metric: took 4.030529ms for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:00.746129  335220 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:01.127712  335220 pod_ready.go:94] pod "kube-controller-manager-embed-certs-990820" is "Ready"
	I1201 20:08:01.127738  335220 pod_ready.go:86] duration metric: took 381.589862ms for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:01.328608  335220 pod_ready.go:83] waiting for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:01.727447  335220 pod_ready.go:94] pod "kube-proxy-t2nmz" is "Ready"
	I1201 20:08:01.727478  335220 pod_ready.go:86] duration metric: took 398.83962ms for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:01.928282  335220 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:02.327484  335220 pod_ready.go:94] pod "kube-scheduler-embed-certs-990820" is "Ready"
	I1201 20:08:02.327516  335220 pod_ready.go:86] duration metric: took 399.170566ms for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:02.327531  335220 pod_ready.go:40] duration metric: took 1.605112571s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:08:02.382964  335220 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:08:02.384506  335220 out.go:179] * Done! kubectl is now configured to use "embed-certs-990820" cluster and "default" namespace by default
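The pod_ready.go lines above show the test harness polling labelled kube-system pods until each reports "Ready" before declaring the cluster done. Below is a minimal client-go sketch of that kind of readiness wait; it is an illustration only, not minikube's actual implementation, and the kubeconfig path, label selectors, and timeouts are assumptions mirroring the log.

// Hedged sketch: poll labelled kube-system pods until all report Ready,
// similar in spirit to the pod_ready.go wait loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Label selectors taken from the log lines above; adjust as needed.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all labelled kube-system pods are Ready")
}

The 4m0s budget matches the "extra waiting up to 4m0s" figure in the log; in practice the loop above returns in a second or two once coredns and storage-provisioner leave Pending, as seen in the embed-certs-990820 run.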
	I1201 20:07:59.430164  343871 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:07:59.520223  343871 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:07:59.896576  343871 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:07:59.896966  343871 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:08:00.230736  343871 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:08:00.896662  343871 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:08:01.163492  343871 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:08:01.981308  343871 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:08:02.563434  343871 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:08:02.563921  343871 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:08:02.567393  343871 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:08:02.568680  343871 out.go:252]   - Booting up control plane ...
	I1201 20:08:02.568773  343871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:08:02.568883  343871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:08:02.569647  343871 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:08:02.584477  343871 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:08:02.584618  343871 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:08:02.591385  343871 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:08:02.591666  343871 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:08:02.591708  343871 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:08:02.715327  343871 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:08:02.715499  343871 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:08:03.716256  343871 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000960041s
	I1201 20:08:03.719502  343871 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:08:03.719680  343871 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1201 20:08:03.719818  343871 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:08:03.719950  343871 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:08:00.418123  345040 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1201 20:08:00.418149  345040 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1201 20:08:00.418509  345040 addons.go:530] duration metric: took 3.841477801s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1201 20:08:00.914534  345040 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:08:00.920440  345040 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
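The api_server.go lines above first get a 500 from /healthz (the rbac/bootstrap-roles post-start hook has not finished) and then a 200 "ok" on the next poll. A minimal probe of that endpoint could look like the sketch below; the address and port are copied from the log, TLS verification is skipped because the apiserver serves a self-signed cert, and this is an illustration rather than minikube's own check.

// Hedged sketch: probe the apiserver healthz endpoint as described above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoint taken from the log; adjust for your cluster.
	resp, err := client.Get("https://192.168.76.2:8443/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 500 with "[-]..." lines (as logged above) means a named check has not
	// passed yet; a 200 with "ok" means the apiserver reports healthy.
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}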
	I1201 20:08:00.921859  345040 api_server.go:141] control plane version: v1.28.0
	I1201 20:08:00.921882  345040 api_server.go:131] duration metric: took 508.087598ms to wait for apiserver health ...
	I1201 20:08:00.921892  345040 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:08:00.926788  345040 system_pods.go:59] 8 kube-system pods found
	I1201 20:08:00.926837  345040 system_pods.go:61] "coredns-5dd5756b68-jpv6h" [06a54ff5-5ae8-4a69-898c-003502faf17d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:00.926850  345040 system_pods.go:61] "etcd-old-k8s-version-217464" [a0a8c1a1-7051-42fc-a621-1e586492bde9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:08:00.926859  345040 system_pods.go:61] "kindnet-x9tkl" [baa3c072-c4e8-4d7c-ad9f-7ee7461ea900] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:08:00.926870  345040 system_pods.go:61] "kube-apiserver-old-k8s-version-217464" [1361b908-903f-4cd0-bd52-9c9e8004cb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:08:00.926880  345040 system_pods.go:61] "kube-controller-manager-old-k8s-version-217464" [e2b817ef-867c-4e61-ae69-800d62199a5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:08:00.926889  345040 system_pods.go:61] "kube-proxy-fjhhh" [12564231-f1d8-4991-b32e-478ee1e61837] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:08:00.926896  345040 system_pods.go:61] "kube-scheduler-old-k8s-version-217464" [99bc6598-dbf3-4de3-9faf-b1a467c96d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:08:00.926903  345040 system_pods.go:61] "storage-provisioner" [dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:00.926910  345040 system_pods.go:74] duration metric: took 5.011277ms to wait for pod list to return data ...
	I1201 20:08:00.926918  345040 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:08:00.929267  345040 default_sa.go:45] found service account: "default"
	I1201 20:08:00.929324  345040 default_sa.go:55] duration metric: took 2.362738ms for default service account to be created ...
	I1201 20:08:00.929337  345040 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:08:00.932693  345040 system_pods.go:86] 8 kube-system pods found
	I1201 20:08:00.932718  345040 system_pods.go:89] "coredns-5dd5756b68-jpv6h" [06a54ff5-5ae8-4a69-898c-003502faf17d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:00.932731  345040 system_pods.go:89] "etcd-old-k8s-version-217464" [a0a8c1a1-7051-42fc-a621-1e586492bde9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:08:00.932744  345040 system_pods.go:89] "kindnet-x9tkl" [baa3c072-c4e8-4d7c-ad9f-7ee7461ea900] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:08:00.932756  345040 system_pods.go:89] "kube-apiserver-old-k8s-version-217464" [1361b908-903f-4cd0-bd52-9c9e8004cb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:08:00.932768  345040 system_pods.go:89] "kube-controller-manager-old-k8s-version-217464" [e2b817ef-867c-4e61-ae69-800d62199a5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:08:00.932782  345040 system_pods.go:89] "kube-proxy-fjhhh" [12564231-f1d8-4991-b32e-478ee1e61837] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:08:00.932790  345040 system_pods.go:89] "kube-scheduler-old-k8s-version-217464" [99bc6598-dbf3-4de3-9faf-b1a467c96d2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:08:00.932801  345040 system_pods.go:89] "storage-provisioner" [dd6ba6d5-6040-4b65-81b7-b77a7f52ccc2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:00.932813  345040 system_pods.go:126] duration metric: took 3.467931ms to wait for k8s-apps to be running ...
	I1201 20:08:00.932837  345040 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:08:00.932886  345040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:00.949852  345040 system_svc.go:56] duration metric: took 17.011305ms WaitForService to wait for kubelet
	I1201 20:08:00.949877  345040 kubeadm.go:587] duration metric: took 4.373170894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:00.949899  345040 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:08:00.952354  345040 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:08:00.952381  345040 node_conditions.go:123] node cpu capacity is 8
	I1201 20:08:00.952396  345040 node_conditions.go:105] duration metric: took 2.492549ms to run NodePressure ...
	I1201 20:08:00.952408  345040 start.go:242] waiting for startup goroutines ...
	I1201 20:08:00.952418  345040 start.go:247] waiting for cluster config update ...
	I1201 20:08:00.952436  345040 start.go:256] writing updated cluster config ...
	I1201 20:08:00.952709  345040 ssh_runner.go:195] Run: rm -f paused
	I1201 20:08:00.958240  345040 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:08:00.964174  345040 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jpv6h" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:08:02.969217  345040 pod_ready.go:104] pod "coredns-5dd5756b68-jpv6h" is not "Ready", error: <nil>
	I1201 20:08:04.880547  343871 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.160922905s
	I1201 20:08:05.582093  343871 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.862539182s
	I1201 20:08:07.221676  343871 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502073156s
	I1201 20:08:07.236822  343871 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:08:07.247222  343871 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:08:07.256518  343871 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:08:07.256804  343871 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-009682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:08:07.264224  343871 kubeadm.go:319] [bootstrap-token] Using token: 867l55.min0f53uqarefhs7
	I1201 20:08:07.265551  343871 out.go:252]   - Configuring RBAC rules ...
	I1201 20:08:07.265708  343871 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:08:07.269662  343871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:08:07.274688  343871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:08:07.277389  343871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:08:07.280173  343871 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:08:07.282495  343871 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:08:07.626920  343871 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:08:08.041449  343871 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:08:08.627431  343871 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:08:08.628229  343871 kubeadm.go:319] 
	I1201 20:08:08.628347  343871 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:08:08.628371  343871 kubeadm.go:319] 
	I1201 20:08:08.628437  343871 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:08:08.628443  343871 kubeadm.go:319] 
	I1201 20:08:08.628464  343871 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:08:08.628535  343871 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:08:08.628583  343871 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:08:08.628590  343871 kubeadm.go:319] 
	I1201 20:08:08.628635  343871 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:08:08.628641  343871 kubeadm.go:319] 
	I1201 20:08:08.628682  343871 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:08:08.628687  343871 kubeadm.go:319] 
	I1201 20:08:08.628731  343871 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:08:08.628835  343871 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:08:08.628955  343871 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:08:08.628970  343871 kubeadm.go:319] 
	I1201 20:08:08.629068  343871 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:08:08.629189  343871 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:08:08.629202  343871 kubeadm.go:319] 
	I1201 20:08:08.629339  343871 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 867l55.min0f53uqarefhs7 \
	I1201 20:08:08.629509  343871 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 20:08:08.629540  343871 kubeadm.go:319] 	--control-plane 
	I1201 20:08:08.629551  343871 kubeadm.go:319] 
	I1201 20:08:08.629665  343871 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:08:08.629674  343871 kubeadm.go:319] 
	I1201 20:08:08.629790  343871 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 867l55.min0f53uqarefhs7 \
	I1201 20:08:08.629941  343871 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
	I1201 20:08:08.632300  343871 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 20:08:08.632458  343871 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:08:08.632496  343871 cni.go:84] Creating CNI manager for ""
	I1201 20:08:08.632508  343871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:08.634711  343871 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:08:08.635740  343871 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:08:08.640090  343871 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1201 20:08:08.640115  343871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:08:08.653236  343871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:08:08.862083  343871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:08:08.862177  343871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:08:08.862176  343871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-009682 minikube.k8s.io/updated_at=2025_12_01T20_08_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=default-k8s-diff-port-009682 minikube.k8s.io/primary=true
	I1201 20:08:08.872019  343871 ops.go:34] apiserver oom_adj: -16
	I1201 20:08:08.949828  343871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1201 20:08:04.971199  345040 pod_ready.go:104] pod "coredns-5dd5756b68-jpv6h" is not "Ready", error: <nil>
	W1201 20:08:07.469703  345040 pod_ready.go:104] pod "coredns-5dd5756b68-jpv6h" is not "Ready", error: <nil>
	I1201 20:08:08.469760  345040 pod_ready.go:94] pod "coredns-5dd5756b68-jpv6h" is "Ready"
	I1201 20:08:08.469782  345040 pod_ready.go:86] duration metric: took 7.505585029s for pod "coredns-5dd5756b68-jpv6h" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:08.472725  345040 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-217464" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:08.476683  345040 pod_ready.go:94] pod "etcd-old-k8s-version-217464" is "Ready"
	I1201 20:08:08.476703  345040 pod_ready.go:86] duration metric: took 3.959221ms for pod "etcd-old-k8s-version-217464" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:08.479589  345040 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-217464" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:08.483475  345040 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-217464" is "Ready"
	I1201 20:08:08.483497  345040 pod_ready.go:86] duration metric: took 3.889434ms for pod "kube-apiserver-old-k8s-version-217464" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:08:08.486133  345040 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-217464" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Dec 01 20:08:00 embed-certs-990820 crio[775]: time="2025-12-01T20:08:00.227537419Z" level=info msg="Starting container: 8a283f97ce398e062d5824334de51338bc12240ebff4f45bf8e89247a70d0d11" id=3d0fb969-6d6c-4010-94ab-d5be837cbb8c name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:00 embed-certs-990820 crio[775]: time="2025-12-01T20:08:00.230424651Z" level=info msg="Started container" PID=1851 containerID=8a283f97ce398e062d5824334de51338bc12240ebff4f45bf8e89247a70d0d11 description=kube-system/coredns-66bc5c9577-qngk9/coredns id=3d0fb969-6d6c-4010-94ab-d5be837cbb8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba2250bb112683c855e700f1d65e17944c57cb62f44a4cd2bd776aec112bedd8
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.854867664Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2375f26f-f600-4599-8303-0b24871f18f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.854943188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.861868136Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d873a27b9a03b5a93d43a702666780e42af43f8dc14018e11bd4e3e2f61b148 UID:35055282-a717-479f-9f0b-454c174c024e NetNS:/var/run/netns/539590b9-ce1f-461b-969b-34b3dbcdb716 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132e00}] Aliases:map[]}"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.861900982Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.872134678Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d873a27b9a03b5a93d43a702666780e42af43f8dc14018e11bd4e3e2f61b148 UID:35055282-a717-479f-9f0b-454c174c024e NetNS:/var/run/netns/539590b9-ce1f-461b-969b-34b3dbcdb716 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132e00}] Aliases:map[]}"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.872355271Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.873381676Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.874406211Z" level=info msg="Ran pod sandbox 4d873a27b9a03b5a93d43a702666780e42af43f8dc14018e11bd4e3e2f61b148 with infra container: default/busybox/POD" id=2375f26f-f600-4599-8303-0b24871f18f1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.876886965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5d181b70-5faf-4651-9488-e857beb3f3a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.877678194Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5d181b70-5faf-4651-9488-e857beb3f3a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.87774832Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5d181b70-5faf-4651-9488-e857beb3f3a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.878613876Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7251ca7a-8bff-43c5-8419-3c3e87d2dcd6 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:02 embed-certs-990820 crio[775]: time="2025-12-01T20:08:02.880349889Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.177046839Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7251ca7a-8bff-43c5-8419-3c3e87d2dcd6 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.178080078Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b63c839e-6fab-46a5-a6c3-6ea93e29bdd4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.179853496Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4027cde-0a51-4004-94fd-b19ed9b0687f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.183443615Z" level=info msg="Creating container: default/busybox/busybox" id=dc09531f-6205-4e42-a419-8f12d940cf3e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.183574776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.188949973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.189520066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.213851463Z" level=info msg="Created container f45eeebe86914c841da3d0df6588283f7aae2207c86bc95de991658a6a448c03: default/busybox/busybox" id=dc09531f-6205-4e42-a419-8f12d940cf3e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.214749986Z" level=info msg="Starting container: f45eeebe86914c841da3d0df6588283f7aae2207c86bc95de991658a6a448c03" id=bc467bfa-b3bb-4c88-9c4e-f4bc0e51ba21 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:04 embed-certs-990820 crio[775]: time="2025-12-01T20:08:04.217178858Z" level=info msg="Started container" PID=1925 containerID=f45eeebe86914c841da3d0df6588283f7aae2207c86bc95de991658a6a448c03 description=default/busybox/busybox id=bc467bfa-b3bb-4c88-9c4e-f4bc0e51ba21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d873a27b9a03b5a93d43a702666780e42af43f8dc14018e11bd4e3e2f61b148
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f45eeebe86914       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   4d873a27b9a03       busybox                                      default
	8a283f97ce398       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   ba2250bb11268       coredns-66bc5c9577-qngk9                     kube-system
	c42bc094ad6de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   b2a3546bdd48b       storage-provisioner                          kube-system
	cf01cef56a375       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      21 seconds ago      Running             kindnet-cni               0                   ae729b8ef2dad       kindnet-cpmn4                                kube-system
	e448396041e81       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   b69f31f41aeba       kube-proxy-t2nmz                             kube-system
	40ecccfc6849d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      32 seconds ago      Running             kube-apiserver            0                   7acacaf5f76af       kube-apiserver-embed-certs-990820            kube-system
	f564b815d8ff8       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      32 seconds ago      Running             kube-scheduler            0                   6d61ce3b08431       kube-scheduler-embed-certs-990820            kube-system
	3f338098afbe7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      32 seconds ago      Running             etcd                      0                   e1da779cd4d59       etcd-embed-certs-990820                      kube-system
	146173a02ae8e       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      32 seconds ago      Running             kube-controller-manager   0                   cb3e6c8601807       kube-controller-manager-embed-certs-990820   kube-system
	
	
	==> coredns [8a283f97ce398e062d5824334de51338bc12240ebff4f45bf8e89247a70d0d11] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51634 - 43720 "HINFO IN 761956568051751977.4785205011809823412. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.050878141s
	
	
	==> describe nodes <==
	Name:               embed-certs-990820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-990820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=embed-certs-990820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-990820
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:08:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-990820
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a8c77f5a-6866-4f6d-8e46-091d133c30f0
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-qngk9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-990820                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-cpmn4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-990820             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-990820    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-t2nmz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-990820             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-990820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-990820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-990820 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-990820 event: Registered Node embed-certs-990820 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-990820 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [3f338098afbe734517a7fbae6bf730eed8018b701b0241df379b9b39a0ec51d1] <==
	{"level":"info","ts":"2025-12-01T20:07:48.042561Z","caller":"traceutil/trace.go:172","msg":"trace[958812437] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"169.014057ms","start":"2025-12-01T20:07:47.873515Z","end":"2025-12-01T20:07:48.042529Z","steps":["trace[958812437] 'process raft request'  (duration: 100.110037ms)","trace[958812437] 'compare'  (duration: 68.795914ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.069062Z","caller":"traceutil/trace.go:172","msg":"trace[1982073255] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"128.133909ms","start":"2025-12-01T20:07:47.940909Z","end":"2025-12-01T20:07:48.069043Z","steps":["trace[1982073255] 'process raft request'  (duration: 128.060295ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.069081Z","caller":"traceutil/trace.go:172","msg":"trace[923305161] transaction","detail":"{read_only:false; response_revision:312; number_of_response:1; }","duration":"128.495145ms","start":"2025-12-01T20:07:47.940565Z","end":"2025-12-01T20:07:48.069060Z","steps":["trace[923305161] 'process raft request'  (duration: 128.297746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.069166Z","caller":"traceutil/trace.go:172","msg":"trace[860048334] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"128.243923ms","start":"2025-12-01T20:07:47.940909Z","end":"2025-12-01T20:07:48.069153Z","steps":["trace[860048334] 'process raft request'  (duration: 128.10295ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T20:07:48.242049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.534992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-01T20:07:48.242118Z","caller":"traceutil/trace.go:172","msg":"trace[1384370147] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:315; }","duration":"146.623417ms","start":"2025-12-01T20:07:48.095478Z","end":"2025-12-01T20:07:48.242101Z","steps":["trace[1384370147] 'agreement among raft nodes before linearized reading'  (duration: 78.576281ms)","trace[1384370147] 'range keys from in-memory index tree'  (duration: 67.859558ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.242155Z","caller":"traceutil/trace.go:172","msg":"trace[1039096282] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"167.4222ms","start":"2025-12-01T20:07:48.074718Z","end":"2025-12-01T20:07:48.242140Z","steps":["trace[1039096282] 'process raft request'  (duration: 99.396713ms)","trace[1039096282] 'compare'  (duration: 67.819383ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.242410Z","caller":"traceutil/trace.go:172","msg":"trace[1502860994] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"167.613212ms","start":"2025-12-01T20:07:48.074786Z","end":"2025-12-01T20:07:48.242399Z","steps":["trace[1502860994] 'process raft request'  (duration: 167.459523ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.242428Z","caller":"traceutil/trace.go:172","msg":"trace[469049251] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"134.840187ms","start":"2025-12-01T20:07:48.107581Z","end":"2025-12-01T20:07:48.242421Z","steps":["trace[469049251] 'process raft request'  (duration: 134.803301ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.242424Z","caller":"traceutil/trace.go:172","msg":"trace[1721893291] transaction","detail":"{read_only:false; response_revision:317; number_of_response:1; }","duration":"167.666756ms","start":"2025-12-01T20:07:48.074737Z","end":"2025-12-01T20:07:48.242404Z","steps":["trace[1721893291] 'process raft request'  (duration: 167.29397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T20:07:48.242643Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.069534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-01T20:07:48.242677Z","caller":"traceutil/trace.go:172","msg":"trace[2140558267] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:320; }","duration":"135.112419ms","start":"2025-12-01T20:07:48.107555Z","end":"2025-12-01T20:07:48.242668Z","steps":["trace[2140558267] 'agreement among raft nodes before linearized reading'  (duration: 134.997222ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.242708Z","caller":"traceutil/trace.go:172","msg":"trace[1430598415] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"166.447193ms","start":"2025-12-01T20:07:48.076251Z","end":"2025-12-01T20:07:48.242698Z","steps":["trace[1430598415] 'process raft request'  (duration: 166.087971ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.462332Z","caller":"traceutil/trace.go:172","msg":"trace[1253579804] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"195.481154ms","start":"2025-12-01T20:07:48.266801Z","end":"2025-12-01T20:07:48.462282Z","steps":["trace[1253579804] 'process raft request'  (duration: 111.409668ms)","trace[1253579804] 'compare'  (duration: 83.949508ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-01T20:07:48.462343Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.180579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-12-01T20:07:48.463001Z","caller":"traceutil/trace.go:172","msg":"trace[1669073484] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:325; }","duration":"123.853708ms","start":"2025-12-01T20:07:48.339128Z","end":"2025-12-01T20:07:48.462981Z","steps":["trace[1669073484] 'agreement among raft nodes before linearized reading'  (duration: 39.03528ms)","trace[1669073484] 'range keys from in-memory index tree'  (duration: 83.995474ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-01T20:07:48.500259Z","caller":"traceutil/trace.go:172","msg":"trace[1059492600] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"186.374184ms","start":"2025-12-01T20:07:48.313863Z","end":"2025-12-01T20:07:48.500238Z","steps":["trace[1059492600] 'process raft request'  (duration: 186.116529ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500560Z","caller":"traceutil/trace.go:172","msg":"trace[1951561670] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"186.758318ms","start":"2025-12-01T20:07:48.313780Z","end":"2025-12-01T20:07:48.500538Z","steps":["trace[1951561670] 'process raft request'  (duration: 186.045322ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500607Z","caller":"traceutil/trace.go:172","msg":"trace[638113661] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"180.339435ms","start":"2025-12-01T20:07:48.320250Z","end":"2025-12-01T20:07:48.500589Z","steps":["trace[638113661] 'process raft request'  (duration: 179.84628ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500446Z","caller":"traceutil/trace.go:172","msg":"trace[521132560] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"180.253723ms","start":"2025-12-01T20:07:48.320167Z","end":"2025-12-01T20:07:48.500421Z","steps":["trace[521132560] 'process raft request'  (duration: 179.887454ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500639Z","caller":"traceutil/trace.go:172","msg":"trace[1572017395] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"179.971494ms","start":"2025-12-01T20:07:48.320660Z","end":"2025-12-01T20:07:48.500632Z","steps":["trace[1572017395] 'process raft request'  (duration: 179.857616ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500679Z","caller":"traceutil/trace.go:172","msg":"trace[2113772229] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"175.11411ms","start":"2025-12-01T20:07:48.325553Z","end":"2025-12-01T20:07:48.500667Z","steps":["trace[2113772229] 'process raft request'  (duration: 175.074543ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500667Z","caller":"traceutil/trace.go:172","msg":"trace[455504135] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"179.966725ms","start":"2025-12-01T20:07:48.320695Z","end":"2025-12-01T20:07:48.500661Z","steps":["trace[455504135] 'process raft request'  (duration: 179.873138ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500711Z","caller":"traceutil/trace.go:172","msg":"trace[1996240022] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"175.772014ms","start":"2025-12-01T20:07:48.324933Z","end":"2025-12-01T20:07:48.500705Z","steps":["trace[1996240022] 'process raft request'  (duration: 175.663771ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T20:07:48.500845Z","caller":"traceutil/trace.go:172","msg":"trace[1388787367] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"180.227199ms","start":"2025-12-01T20:07:48.320606Z","end":"2025-12-01T20:07:48.500833Z","steps":["trace[1388787367] 'process raft request'  (duration: 179.571505ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:08:11 up  1:50,  0 user,  load average: 3.61, 3.15, 2.24
	Linux embed-certs-990820 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf01cef56a375dc0121296f9117f0232363f42eabc49ea71e9ad42f97679d4cd] <==
	I1201 20:07:49.226871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:07:49.227402       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1201 20:07:49.227601       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:07:49.227627       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:07:49.227651       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:07:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:07:49.527255       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:07:49.527299       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:07:49.527312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:07:49.527584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:07:49.927483       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:07:49.927522       1 metrics.go:72] Registering metrics
	I1201 20:07:49.927596       1 controller.go:711] "Syncing nftables rules"
	I1201 20:07:59.527624       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:07:59.527668       1 main.go:301] handling current node
	I1201 20:08:09.527449       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:08:09.527484       1 main.go:301] handling current node
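The one warning of note in the kindnet log is "nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock", i.e. NRI is not enabled for this cri-o setup, so kindnet falls back to its informer-based network-policy controller (the caches still sync 400ms later). A quick check, assuming the default NRI socket path and the usual /etc/crio config directory (both assumptions, not confirmed by the log):

    # Hypothetical check, not part of the test run:
    out/minikube-linux-amd64 -p embed-certs-990820 ssh -- \
      "ls -l /var/run/nri/ 2>&1; grep -ri nri /etc/crio/ 2>/dev/null"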
	
	
	==> kube-apiserver [40ecccfc6849d0eb31af64de180c8e2c897ebd13d0f7dc6a0073646f94dd1950] <==
	I1201 20:07:39.842925       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:07:39.842932       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:07:39.845339       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:39.845403       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1201 20:07:39.850516       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:39.852022       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:07:39.852563       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:07:40.740312       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1201 20:07:40.744152       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:07:40.744168       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:07:41.202743       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:07:41.239878       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:07:41.348355       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:07:41.354867       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1201 20:07:41.356083       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:07:41.361750       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:07:41.896102       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:07:42.554573       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:07:42.563325       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:07:42.569249       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:07:47.727011       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:07:47.805776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:47.860431       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:07:48.074208       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1201 20:08:09.636982       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35104: use of closed network connection
	
	
	==> kube-controller-manager [146173a02ae8eeba524f28815411257861748609732f3b9c4c70cb93b9d114a0] <==
	I1201 20:07:46.895569       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 20:07:46.895692       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-990820"
	I1201 20:07:46.895723       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:07:46.895753       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1201 20:07:46.896234       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:07:46.896238       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 20:07:46.896337       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:07:46.896387       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 20:07:46.896421       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1201 20:07:46.896533       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:07:46.897885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 20:07:46.897888       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1201 20:07:46.898556       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 20:07:46.899417       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1201 20:07:46.901643       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 20:07:46.904899       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 20:07:46.906040       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:07:46.911278       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1201 20:07:46.916481       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1201 20:07:46.916541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1201 20:07:46.917743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1201 20:07:46.917768       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1201 20:07:46.920164       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:07:46.926327       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:08:01.896778       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e448396041e81c2b5f7d6ba547b1f57ed0ed6147c6582f37551e6f7578dca99c] <==
	I1201 20:07:49.064559       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:07:49.123618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:07:49.224151       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:07:49.224195       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1201 20:07:49.224430       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:07:49.245972       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:07:49.246043       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:07:49.252069       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:07:49.252581       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:07:49.252616       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:07:49.256232       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:07:49.256335       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:07:49.256388       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:07:49.256411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:07:49.256427       1 config.go:200] "Starting service config controller"
	I1201 20:07:49.256483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:07:49.256736       1 config.go:309] "Starting node config controller"
	I1201 20:07:49.256746       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:07:49.256753       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:07:49.356557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:07:49.356593       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:07:49.356610       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f564b815d8ff8d2d5dc69cbe6b6af1988054f729d573bb2742a054ec8b589550] <==
	E1201 20:07:39.987248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:07:39.987463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:07:39.987489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:07:39.987944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:07:39.988020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:07:39.988071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:07:39.988123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:07:39.988140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:07:39.988199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:07:39.988255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:07:39.988262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:07:39.988390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:07:39.987957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:07:39.988437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:07:39.988623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:07:39.988660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:07:40.802198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:07:40.816242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:07:40.853366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1201 20:07:40.908934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:07:40.921199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:07:40.921260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:07:40.986702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:07:40.994861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1201 20:07:42.683657       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
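The burst of "Failed to watch ... is forbidden" errors at 20:07:39-20:07:40 is the scheduler starting before its RBAC bindings exist; the closing "Caches are synced" line at 20:07:42 shows the informers recovered once authorization caught up, so these entries are startup noise rather than a contributor to the failure. A hedged way to confirm the scheduler settled, assuming the usual kubeadm static-pod label component=kube-scheduler:

    # Hypothetical follow-up, not part of the test run:
    kubectl --context embed-certs-990820 -n kube-system logs -l component=kube-scheduler --tail=20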
	
	
	==> kubelet <==
	Dec 01 20:07:43 embed-certs-990820 kubelet[1317]: E1201 20:07:43.416771    1317 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-990820\" already exists" pod="kube-system/kube-apiserver-embed-certs-990820"
	Dec 01 20:07:43 embed-certs-990820 kubelet[1317]: I1201 20:07:43.427741    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-990820" podStartSLOduration=1.427724588 podStartE2EDuration="1.427724588s" podCreationTimestamp="2025-12-01 20:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:43.427671464 +0000 UTC m=+1.122139070" watchObservedRunningTime="2025-12-01 20:07:43.427724588 +0000 UTC m=+1.122192192"
	Dec 01 20:07:43 embed-certs-990820 kubelet[1317]: I1201 20:07:43.437622    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-990820" podStartSLOduration=1.437602577 podStartE2EDuration="1.437602577s" podCreationTimestamp="2025-12-01 20:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:43.437554045 +0000 UTC m=+1.132021649" watchObservedRunningTime="2025-12-01 20:07:43.437602577 +0000 UTC m=+1.132070181"
	Dec 01 20:07:43 embed-certs-990820 kubelet[1317]: I1201 20:07:43.449333    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-990820" podStartSLOduration=1.449280706 podStartE2EDuration="1.449280706s" podCreationTimestamp="2025-12-01 20:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:43.449268205 +0000 UTC m=+1.143735810" watchObservedRunningTime="2025-12-01 20:07:43.449280706 +0000 UTC m=+1.143748311"
	Dec 01 20:07:43 embed-certs-990820 kubelet[1317]: I1201 20:07:43.475984    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-990820" podStartSLOduration=1.475960629 podStartE2EDuration="1.475960629s" podCreationTimestamp="2025-12-01 20:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:43.460953097 +0000 UTC m=+1.155420702" watchObservedRunningTime="2025-12-01 20:07:43.475960629 +0000 UTC m=+1.170428233"
	Dec 01 20:07:46 embed-certs-990820 kubelet[1317]: I1201 20:07:46.920433    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 01 20:07:46 embed-certs-990820 kubelet[1317]: I1201 20:07:46.921061    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616697    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c4f1726-e033-43cf-8bd5-4f09a8761f82-kube-proxy\") pod \"kube-proxy-t2nmz\" (UID: \"1c4f1726-e033-43cf-8bd5-4f09a8761f82\") " pod="kube-system/kube-proxy-t2nmz"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616752    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2549d\" (UniqueName: \"kubernetes.io/projected/1c4f1726-e033-43cf-8bd5-4f09a8761f82-kube-api-access-2549d\") pod \"kube-proxy-t2nmz\" (UID: \"1c4f1726-e033-43cf-8bd5-4f09a8761f82\") " pod="kube-system/kube-proxy-t2nmz"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616782    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75697d60-89d9-474b-97e4-98e1de47830d-cni-cfg\") pod \"kindnet-cpmn4\" (UID: \"75697d60-89d9-474b-97e4-98e1de47830d\") " pod="kube-system/kindnet-cpmn4"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616863    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c4f1726-e033-43cf-8bd5-4f09a8761f82-xtables-lock\") pod \"kube-proxy-t2nmz\" (UID: \"1c4f1726-e033-43cf-8bd5-4f09a8761f82\") " pod="kube-system/kube-proxy-t2nmz"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616911    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75697d60-89d9-474b-97e4-98e1de47830d-xtables-lock\") pod \"kindnet-cpmn4\" (UID: \"75697d60-89d9-474b-97e4-98e1de47830d\") " pod="kube-system/kindnet-cpmn4"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616928    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75697d60-89d9-474b-97e4-98e1de47830d-lib-modules\") pod \"kindnet-cpmn4\" (UID: \"75697d60-89d9-474b-97e4-98e1de47830d\") " pod="kube-system/kindnet-cpmn4"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.616952    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5k4n\" (UniqueName: \"kubernetes.io/projected/75697d60-89d9-474b-97e4-98e1de47830d-kube-api-access-h5k4n\") pod \"kindnet-cpmn4\" (UID: \"75697d60-89d9-474b-97e4-98e1de47830d\") " pod="kube-system/kindnet-cpmn4"
	Dec 01 20:07:48 embed-certs-990820 kubelet[1317]: I1201 20:07:48.617010    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c4f1726-e033-43cf-8bd5-4f09a8761f82-lib-modules\") pod \"kube-proxy-t2nmz\" (UID: \"1c4f1726-e033-43cf-8bd5-4f09a8761f82\") " pod="kube-system/kube-proxy-t2nmz"
	Dec 01 20:07:49 embed-certs-990820 kubelet[1317]: I1201 20:07:49.443511    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cpmn4" podStartSLOduration=1.443484961 podStartE2EDuration="1.443484961s" podCreationTimestamp="2025-12-01 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:49.440473988 +0000 UTC m=+7.134941592" watchObservedRunningTime="2025-12-01 20:07:49.443484961 +0000 UTC m=+7.137952566"
	Dec 01 20:07:52 embed-certs-990820 kubelet[1317]: I1201 20:07:52.271949    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2nmz" podStartSLOduration=4.271925423 podStartE2EDuration="4.271925423s" podCreationTimestamp="2025-12-01 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:07:49.456802805 +0000 UTC m=+7.151270410" watchObservedRunningTime="2025-12-01 20:07:52.271925423 +0000 UTC m=+9.966393028"
	Dec 01 20:07:59 embed-certs-990820 kubelet[1317]: I1201 20:07:59.817084    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 01 20:07:59 embed-certs-990820 kubelet[1317]: I1201 20:07:59.898111    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/979d22fd-6b50-45b1-9ef0-6e1a932db5c2-tmp\") pod \"storage-provisioner\" (UID: \"979d22fd-6b50-45b1-9ef0-6e1a932db5c2\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:59 embed-certs-990820 kubelet[1317]: I1201 20:07:59.898178    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8mw\" (UniqueName: \"kubernetes.io/projected/82add6c5-30ec-4a10-8643-ebfd6e4446b2-kube-api-access-mh8mw\") pod \"coredns-66bc5c9577-qngk9\" (UID: \"82add6c5-30ec-4a10-8643-ebfd6e4446b2\") " pod="kube-system/coredns-66bc5c9577-qngk9"
	Dec 01 20:07:59 embed-certs-990820 kubelet[1317]: I1201 20:07:59.898320    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl9qt\" (UniqueName: \"kubernetes.io/projected/979d22fd-6b50-45b1-9ef0-6e1a932db5c2-kube-api-access-zl9qt\") pod \"storage-provisioner\" (UID: \"979d22fd-6b50-45b1-9ef0-6e1a932db5c2\") " pod="kube-system/storage-provisioner"
	Dec 01 20:07:59 embed-certs-990820 kubelet[1317]: I1201 20:07:59.898366    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82add6c5-30ec-4a10-8643-ebfd6e4446b2-config-volume\") pod \"coredns-66bc5c9577-qngk9\" (UID: \"82add6c5-30ec-4a10-8643-ebfd6e4446b2\") " pod="kube-system/coredns-66bc5c9577-qngk9"
	Dec 01 20:08:00 embed-certs-990820 kubelet[1317]: I1201 20:08:00.482984    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qngk9" podStartSLOduration=12.482959419 podStartE2EDuration="12.482959419s" podCreationTimestamp="2025-12-01 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:00.467153863 +0000 UTC m=+18.161621452" watchObservedRunningTime="2025-12-01 20:08:00.482959419 +0000 UTC m=+18.177427023"
	Dec 01 20:08:00 embed-certs-990820 kubelet[1317]: I1201 20:08:00.483407    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.483387346 podStartE2EDuration="11.483387346s" podCreationTimestamp="2025-12-01 20:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:00.482655235 +0000 UTC m=+18.177122857" watchObservedRunningTime="2025-12-01 20:08:00.483387346 +0000 UTC m=+18.177854950"
	Dec 01 20:08:02 embed-certs-990820 kubelet[1317]: I1201 20:08:02.616107    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnkm\" (UniqueName: \"kubernetes.io/projected/35055282-a717-479f-9f0b-454c174c024e-kube-api-access-kgnkm\") pod \"busybox\" (UID: \"35055282-a717-479f-9f0b-454c174c024e\") " pod="default/busybox"
	
	
	==> storage-provisioner [c42bc094ad6deb7b434b94026d1eaa96455f9f150d3ee2540a8a23312f65d1ed] <==
	I1201 20:08:00.230444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:08:00.243010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:08:00.243176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:08:00.246441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:00.255122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:08:00.255617       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:08:00.256450       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_bc6f47e6-551b-4510-bcaf-9d4a5535f2d4!
	I1201 20:08:00.255803       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eba9bb8f-e5ee-4b48-8968-4ade718acf50", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-990820_bc6f47e6-551b-4510-bcaf-9d4a5535f2d4 became leader
	W1201 20:08:00.262736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:00.270364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:08:00.359191       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_bc6f47e6-551b-4510-bcaf-9d4a5535f2d4!
	W1201 20:08:02.274380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:02.279868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:04.284019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:04.290554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:06.294166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:06.298809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:08.302398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:08.305961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:10.309868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:10.314874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
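The repeated "v1 Endpoints is deprecated in v1.33+" warnings come from the provisioner's Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath (acquired a few lines earlier); they are client-side deprecation notices, not errors. The election object named in the log can be inspected directly:

    # Hypothetical check, object names taken from the log above:
    kubectl --context embed-certs-990820 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml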
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-990820 -n embed-certs-990820
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-990820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-217464 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-217464 --alsologtostderr -v=1: exit status 80 (1.809794575s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-217464 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:08:28.211669  354092 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:28.211821  354092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.211832  354092 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:28.211837  354092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.212153  354092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:28.212484  354092 out.go:368] Setting JSON to false
	I1201 20:08:28.212510  354092 mustload.go:66] Loading cluster: old-k8s-version-217464
	I1201 20:08:28.212990  354092 config.go:182] Loaded profile config "old-k8s-version-217464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:08:28.213588  354092 cli_runner.go:164] Run: docker container inspect old-k8s-version-217464 --format={{.State.Status}}
	I1201 20:08:28.239231  354092 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:08:28.239566  354092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.312190  354092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.301113231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.312987  354092 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-217464 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:08:28.315931  354092 out.go:179] * Pausing node old-k8s-version-217464 ... 
	I1201 20:08:28.317093  354092 host.go:66] Checking if "old-k8s-version-217464" exists ...
	I1201 20:08:28.317398  354092 ssh_runner.go:195] Run: systemctl --version
	I1201 20:08:28.317438  354092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-217464
	I1201 20:08:28.338332  354092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/old-k8s-version-217464/id_rsa Username:docker}
	I1201 20:08:28.442178  354092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:28.456804  354092 pause.go:52] kubelet running: true
	I1201 20:08:28.456888  354092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:08:28.704837  354092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:08:28.705198  354092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:08:28.797433  354092 cri.go:89] found id: "4dd51eba90da2b0039140bea7a61cb891af1da1e61a21339fb4f21afc50fb187"
	I1201 20:08:28.797465  354092 cri.go:89] found id: "5abb611d7e9f7dcb22802c91729f1a178ed173a48b43625dc06b56faba224150"
	I1201 20:08:28.797472  354092 cri.go:89] found id: "36d96bfb7d9320bba36f604a96f0cf8192ff4654d2d9ddf86407363967e92dbe"
	I1201 20:08:28.797477  354092 cri.go:89] found id: "ecbcc841645dbe266d12b72d95aeff1393b6a4de72113d3f968cd8e953351ccc"
	I1201 20:08:28.797482  354092 cri.go:89] found id: "9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421"
	I1201 20:08:28.797486  354092 cri.go:89] found id: "50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591"
	I1201 20:08:28.797491  354092 cri.go:89] found id: "4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f"
	I1201 20:08:28.797495  354092 cri.go:89] found id: "604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428"
	I1201 20:08:28.797515  354092 cri.go:89] found id: "2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	I1201 20:08:28.797522  354092 cri.go:89] found id: "e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac"
	I1201 20:08:28.797527  354092 cri.go:89] found id: ""
	I1201 20:08:28.797570  354092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:08:28.813231  354092 retry.go:31] will retry after 151.306045ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:28Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:28.965879  354092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:28.993194  354092 pause.go:52] kubelet running: false
	I1201 20:08:28.993438  354092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:08:29.200431  354092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:08:29.200525  354092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:08:29.285343  354092 cri.go:89] found id: "4dd51eba90da2b0039140bea7a61cb891af1da1e61a21339fb4f21afc50fb187"
	I1201 20:08:29.285367  354092 cri.go:89] found id: "5abb611d7e9f7dcb22802c91729f1a178ed173a48b43625dc06b56faba224150"
	I1201 20:08:29.285374  354092 cri.go:89] found id: "36d96bfb7d9320bba36f604a96f0cf8192ff4654d2d9ddf86407363967e92dbe"
	I1201 20:08:29.285379  354092 cri.go:89] found id: "ecbcc841645dbe266d12b72d95aeff1393b6a4de72113d3f968cd8e953351ccc"
	I1201 20:08:29.285382  354092 cri.go:89] found id: "9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421"
	I1201 20:08:29.285385  354092 cri.go:89] found id: "50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591"
	I1201 20:08:29.285388  354092 cri.go:89] found id: "4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f"
	I1201 20:08:29.285391  354092 cri.go:89] found id: "604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428"
	I1201 20:08:29.285394  354092 cri.go:89] found id: "2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	I1201 20:08:29.285405  354092 cri.go:89] found id: "e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac"
	I1201 20:08:29.285409  354092 cri.go:89] found id: ""
	I1201 20:08:29.285456  354092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:08:29.299143  354092 retry.go:31] will retry after 370.533028ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:29.670837  354092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:29.686712  354092 pause.go:52] kubelet running: false
	I1201 20:08:29.686774  354092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:08:29.854755  354092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:08:29.854834  354092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:08:29.930524  354092 cri.go:89] found id: "4dd51eba90da2b0039140bea7a61cb891af1da1e61a21339fb4f21afc50fb187"
	I1201 20:08:29.930545  354092 cri.go:89] found id: "5abb611d7e9f7dcb22802c91729f1a178ed173a48b43625dc06b56faba224150"
	I1201 20:08:29.930549  354092 cri.go:89] found id: "36d96bfb7d9320bba36f604a96f0cf8192ff4654d2d9ddf86407363967e92dbe"
	I1201 20:08:29.930552  354092 cri.go:89] found id: "ecbcc841645dbe266d12b72d95aeff1393b6a4de72113d3f968cd8e953351ccc"
	I1201 20:08:29.930555  354092 cri.go:89] found id: "9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421"
	I1201 20:08:29.930559  354092 cri.go:89] found id: "50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591"
	I1201 20:08:29.930562  354092 cri.go:89] found id: "4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f"
	I1201 20:08:29.930564  354092 cri.go:89] found id: "604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428"
	I1201 20:08:29.930567  354092 cri.go:89] found id: "2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	I1201 20:08:29.930589  354092 cri.go:89] found id: "e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac"
	I1201 20:08:29.930592  354092 cri.go:89] found id: ""
	I1201 20:08:29.930627  354092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:08:29.946989  354092 out.go:203] 
	W1201 20:08:29.948408  354092 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:08:29.948444  354092 out.go:285] * 
	* 
	W1201 20:08:29.953532  354092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:08:29.954851  354092 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-217464 --alsologtostderr -v=1 failed: exit status 80
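The exit status 80 (GUEST_PAUSE) traces back to the probe in the stderr block above: minikube's pause path runs "sudo runc list -f json" on the node, that command fails with "open /run/runc: no such file or directory", and every retry plus the final attempt aborts before anything is paused. A minimal reproduction sketch, assuming SSH access to the profile's node; the trailing directory listing is only there to show which OCI runtime state directories actually exist on this image (their names are not confirmed by the log):

    # Hypothetical reproduction of the failing probe, not part of the test run:
    out/minikube-linux-amd64 -p old-k8s-version-217464 ssh -- \
      "sudo runc list -f json; ls /run/ | grep -E 'runc|crun|crio'"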
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-217464
helpers_test.go:243: (dbg) docker inspect old-k8s-version-217464:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	        "Created": "2025-12-01T20:06:33.460541938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:07:49.618124237Z",
	            "FinishedAt": "2025-12-01T20:07:46.563763895Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hosts",
	        "LogPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9-json.log",
	        "Name": "/old-k8s-version-217464",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-217464:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-217464",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	                "LowerDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-217464",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-217464/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-217464",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "54b0ef24bb1d6f5f0e03e3d8496751b6ca51023715f03e394a30adb4b6eacf24",
	            "SandboxKey": "/var/run/docker/netns/54b0ef24bb1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-217464": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1d25a2ef13e6f84d7dee9dd1a8ffb7c5ebd5713411470cffa733d6c3a1a597a",
	                    "EndpointID": "d64e441cfe130770075eb02e93a72afa372d90d1f7901ccb57fa19805c32e4d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ce:bb:97:9d:89:de",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-217464",
	                        "e59219b4cc96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
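As a side note, the container state relevant to the pause check can be pulled from the dump above with a format string; this is only a convenience sketch, not something the post-mortem helpers run:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-217464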
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464: exit status 2 (371.698812ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25: (1.202342895s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-551864 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo containerd config dump                                                                                                                                                                                                  │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                               │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                               │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:28.477537  354303 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:28.477626  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477632  354303 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:28.477637  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477827  354303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:28.478234  354303 out.go:368] Setting JSON to false
	I1201 20:08:28.479648  354303 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6659,"bootTime":1764613049,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:28.479727  354303 start.go:143] virtualization: kvm guest
	I1201 20:08:28.481854  354303 out.go:179] * [embed-certs-990820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:28.483774  354303 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:28.483785  354303 notify.go:221] Checking for updates...
	I1201 20:08:28.485966  354303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:28.487075  354303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.488125  354303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:28.490714  354303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:28.494461  354303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:28.496133  354303 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:28.496872  354303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:28.537089  354303 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:28.537195  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.619437  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.601656972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.619588  354303 docker.go:319] overlay module found
	I1201 20:08:28.623382  354303 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:28.624440  354303 start.go:309] selected driver: docker
	I1201 20:08:28.624455  354303 start.go:927] validating driver "docker" against &{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.624559  354303 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:28.625273  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.724819  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.710117278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.725173  354303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:28.725212  354303 cni.go:84] Creating CNI manager for ""
	I1201 20:08:28.725327  354303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:28.725393  354303 start.go:353] cluster config:
	{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.727432  354303 out.go:179] * Starting "embed-certs-990820" primary control-plane node in "embed-certs-990820" cluster
	I1201 20:08:28.728767  354303 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:28.729983  354303 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:28.731353  354303 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:28.731391  354303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:28.731399  354303 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:28.731490  354303 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:28.731498  354303 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:28.731587  354303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json ...
	I1201 20:08:28.731658  354303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:28.759672  354303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:28.759701  354303 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:28.759721  354303 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:28.759756  354303 start.go:360] acquireMachinesLock for embed-certs-990820: {Name:mk0308557d4346623fb3193dcae3b8f2c186483d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:28.759830  354303 start.go:364] duration metric: took 48.101µs to acquireMachinesLock for "embed-certs-990820"
	I1201 20:08:28.759851  354303 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:28.759861  354303 fix.go:54] fixHost starting: 
	I1201 20:08:28.760161  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:28.783427  354303 fix.go:112] recreateIfNeeded on embed-certs-990820: state=Stopped err=<nil>
	W1201 20:08:28.783460  354303 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:27.643530  352497 cli_runner.go:164] Run: docker network inspect no-preload-240359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:08:27.662423  352497 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 20:08:27.666732  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.677834  352497 kubeadm.go:884] updating cluster {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:08:27.677959  352497 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:08:27.677993  352497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:08:27.712719  352497 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:08:27.712742  352497 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:08:27.712751  352497 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:27.712867  352497 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-240359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:27.712963  352497 ssh_runner.go:195] Run: crio config
	I1201 20:08:27.772580  352497 cni.go:84] Creating CNI manager for ""
	I1201 20:08:27.772666  352497 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:27.772704  352497 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:08:27.772740  352497 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-240359 NodeName:no-preload-240359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:27.772885  352497 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-240359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:08:27.772964  352497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:27.783386  352497 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:08:27.783477  352497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:27.793367  352497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:27.808748  352497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:27.823123  352497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1201 20:08:27.838818  352497 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:27.843068  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.855122  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:27.958473  352497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:27.982144  352497 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359 for IP: 192.168.85.2
	I1201 20:08:27.982163  352497 certs.go:195] generating shared ca certs ...
	I1201 20:08:27.982181  352497 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:27.982340  352497 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:27.982401  352497 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:27.982414  352497 certs.go:257] generating profile certs ...
	I1201 20:08:27.982519  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key
	I1201 20:08:27.982608  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c
	I1201 20:08:27.982668  352497 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key
	I1201 20:08:27.982803  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:27.982845  352497 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:27.982860  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:27.982897  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:27.982938  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:27.982982  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:27.983043  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:27.983729  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:28.004982  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:28.025922  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:28.058620  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:28.103045  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:28.132413  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:28.166561  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:28.185942  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:28.206527  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:28.228846  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:28.252449  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:28.281454  352497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:28.298144  352497 ssh_runner.go:195] Run: openssl version
	I1201 20:08:28.305187  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:28.314813  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318879  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318925  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.358724  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:28.368184  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:28.378620  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382761  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382803  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.419263  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:08:28.428910  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:28.438671  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442781  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442833  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.482005  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:08:28.491828  352497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:28.496205  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:08:28.556754  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:08:28.617794  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:08:28.678413  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:08:28.740451  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:08:28.796812  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:08:28.855649  352497 kubeadm.go:401] StartCluster: {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.855725  352497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:28.855768  352497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:28.892115  352497 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:08:28.892173  352497 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:08:28.892180  352497 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:08:28.892186  352497 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:08:28.892191  352497 cri.go:89] found id: ""
	I1201 20:08:28.892256  352497 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:08:28.913619  352497 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:28Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:28.913757  352497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:28.928853  352497 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:08:28.928874  352497 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:08:28.928929  352497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:08:28.939353  352497 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:08:28.940034  352497 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-240359" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.940417  352497 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-240359" cluster setting kubeconfig missing "no-preload-240359" context setting]
	I1201 20:08:28.940959  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.942361  352497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:08:28.954775  352497 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1201 20:08:28.954807  352497 kubeadm.go:602] duration metric: took 25.927718ms to restartPrimaryControlPlane
	I1201 20:08:28.954817  352497 kubeadm.go:403] duration metric: took 99.177392ms to StartCluster
	I1201 20:08:28.954834  352497 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.954908  352497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.956103  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.956326  352497 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:08:28.956456  352497 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:08:28.956582  352497 addons.go:70] Setting storage-provisioner=true in profile "no-preload-240359"
	I1201 20:08:28.956588  352497 config.go:182] Loaded profile config "no-preload-240359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:08:28.956609  352497 addons.go:239] Setting addon storage-provisioner=true in "no-preload-240359"
	I1201 20:08:28.956602  352497 addons.go:70] Setting dashboard=true in profile "no-preload-240359"
	W1201 20:08:28.956619  352497 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:08:28.956628  352497 addons.go:239] Setting addon dashboard=true in "no-preload-240359"
	W1201 20:08:28.956638  352497 addons.go:248] addon dashboard should already be in state true
	I1201 20:08:28.956643  352497 addons.go:70] Setting default-storageclass=true in profile "no-preload-240359"
	I1201 20:08:28.956653  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956657  352497 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-240359"
	I1201 20:08:28.956668  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956880  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957133  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957134  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.958590  352497 out.go:179] * Verifying Kubernetes components...
	I1201 20:08:28.960227  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:28.988202  352497 addons.go:239] Setting addon default-storageclass=true in "no-preload-240359"
	W1201 20:08:28.988226  352497 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:08:28.988254  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.988842  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.989083  352497 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:08:28.989087  352497 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:08:28.990473  352497 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:28.990492  352497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:08:28.990714  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:28.991820  352497 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.307902549Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c05b6a727181822c29a44768573a9421df9ed62dfa43e6a766784f7c77692d9b/merged/etc/group: no such file or directory"
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.308330663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.338414366Z" level=info msg="Created container e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9/kubernetes-dashboard" id=e393f827-0074-4470-bf37-a46389eccb7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.338963619Z" level=info msg="Starting container: e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac" id=3791e3c1-a8f3-43ff-a8cf-a1141309ea71 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.340721099Z" level=info msg="Started container" PID=1525 containerID=e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9/kubernetes-dashboard id=3791e3c1-a8f3-43ff-a8cf-a1141309ea71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fe0b0e6512df10b1c36eea59afcd8fc1d633405b73eb3845bbd65e57b6878f
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.365369661Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=92f8635d-5d72-436e-85be-7bd6cd9f99e1 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.36614855Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67f863a5-32f9-4e42-9652-6a9f73ed73b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.368527672Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=182d18da-802d-48a2-8337-fdefe428a709 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.36863344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.37516729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.375699343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.406818418Z" level=info msg="Created container 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=182d18da-802d-48a2-8337-fdefe428a709 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.407463108Z" level=info msg="Starting container: 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed" id=058fc660-19fb-436a-9220-f03a4cff4be8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.409099646Z" level=info msg="Started container" PID=1753 containerID=83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper id=058fc660-19fb-436a-9220-f03a4cff4be8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b40270d9a95551ea930bbba4286b608e79ac69c117aace0a6f499d28324fc76
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.974807646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=261f8d14-bf0e-424e-9f45-657aa077b925 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.977698995Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a4b7bb23-ad40-448a-a555-a62b0d86c566 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.980913418Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=dec29c36-0ea3-4ce1-a16f-9b2735f34c7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.981044369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.988732739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.989532789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.015409297Z" level=info msg="Created container 2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=dec29c36-0ea3-4ce1-a16f-9b2735f34c7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.015942647Z" level=info msg="Starting container: 2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285" id=c015908c-2313-44de-862c-7a2bf0807e92 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.017981083Z" level=info msg="Started container" PID=1764 containerID=2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper id=c015908c-2313-44de-862c-7a2bf0807e92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b40270d9a95551ea930bbba4286b608e79ac69c117aace0a6f499d28324fc76
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.979807208Z" level=info msg="Removing container: 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed" id=c2f6733c-72cd-41bb-b3a0-8e39694ed95d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.988642063Z" level=info msg="Removed container 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=c2f6733c-72cd-41bb-b3a0-8e39694ed95d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2136aec85187f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   1                   0b40270d9a955       dashboard-metrics-scraper-5f989dc9cf-gj8zq       kubernetes-dashboard
	e526c3a481a4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   a1fe0b0e6512d       kubernetes-dashboard-8694d4445c-pfxh9            kubernetes-dashboard
	4dd51eba90da2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           30 seconds ago      Running             coredns                     0                   b1c4501b4eaf9       coredns-5dd5756b68-jpv6h                         kube-system
	95bd7159ce5a2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           30 seconds ago      Running             busybox                     1                   24b980a04d088       busybox                                          default
	5abb611d7e9f7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           30 seconds ago      Running             kindnet-cni                 0                   0e464b5f0ea3d       kindnet-x9tkl                                    kube-system
	36d96bfb7d932       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           30 seconds ago      Running             kube-proxy                  0                   d0f1bf9bee9d1       kube-proxy-fjhhh                                 kube-system
	ecbcc841645db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago      Exited              storage-provisioner         0                   f29924e3a3176       storage-provisioner                              kube-system
	9d50552004acc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           34 seconds ago      Running             etcd                        0                   037331d240a2d       etcd-old-k8s-version-217464                      kube-system
	50a711978543f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           34 seconds ago      Running             kube-scheduler              0                   b75d4ab87e199       kube-scheduler-old-k8s-version-217464            kube-system
	4649c73be5eb9       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           34 seconds ago      Running             kube-apiserver              0                   d4d22b070698f       kube-apiserver-old-k8s-version-217464            kube-system
	604c30dbad503       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           34 seconds ago      Running             kube-controller-manager     0                   e397ada9f06c4       kube-controller-manager-old-k8s-version-217464   kube-system
	
	
	==> coredns [4dd51eba90da2b0039140bea7a61cb891af1da1e61a21339fb4f21afc50fb187] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38871 - 14814 "HINFO IN 5245048074702711661.8598468947235307428. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029405188s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-217464
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-217464
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=old-k8s-version-217464
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_06_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:06:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-217464
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:08:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-217464
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                ed847e9c-b6d4-4f47-a0ed-41ae4070a3c6
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-5dd5756b68-jpv6h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-old-k8s-version-217464                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         102s
	  kube-system                 kindnet-x9tkl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-old-k8s-version-217464             250m (3%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-old-k8s-version-217464    200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-fjhhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-old-k8s-version-217464             100m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gj8zq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pfxh9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x8 over 108s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 108s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x8 over 108s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           90s                  node-controller  Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller
	  Normal  NodeReady                76s                  kubelet          Node old-k8s-version-217464 status is now: NodeReady
	  Normal  Starting                 36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20s                  node-controller  Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421] <==
	{"level":"info","ts":"2025-12-01T20:07:56.475401Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-01T20:07:56.475492Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-01T20:07:56.476089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-01T20:07:56.476273Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-01T20:07:56.476592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:07:56.476663Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:07:56.477493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-01T20:07:56.477618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:07:56.479743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:07:56.477739Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-01T20:07:56.477777Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-01T20:07:58.265913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.265991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.265999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.266006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.267181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:07:58.267191Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-217464 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-01T20:07:58.267201Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:07:58.267439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-01T20:07:58.267484Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-01T20:07:58.268519Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-01T20:07:58.268514Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:08:31 up  1:51,  0 user,  load average: 3.47, 3.16, 2.26
	Linux old-k8s-version-217464 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5abb611d7e9f7dcb22802c91729f1a178ed173a48b43625dc06b56faba224150] <==
	I1201 20:08:00.555615       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:00.555943       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1201 20:08:00.556146       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:00.556168       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:00.556193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:00.758321       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:00.758421       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:00.758434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:00.758561       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:01.158497       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:01.158523       1 metrics.go:72] Registering metrics
	I1201 20:08:01.158589       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:10.668395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:10.668449       1 main.go:301] handling current node
	I1201 20:08:20.666703       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:20.666755       1 main.go:301] handling current node
	I1201 20:08:30.675553       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:30.675593       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f] <==
	I1201 20:07:59.344190       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1201 20:07:59.344328       1 shared_informer.go:318] Caches are synced for configmaps
	I1201 20:07:59.344410       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:07:59.345538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1201 20:07:59.346781       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1201 20:07:59.346852       1 aggregator.go:166] initial CRD sync complete...
	I1201 20:07:59.346866       1 autoregister_controller.go:141] Starting autoregister controller
	I1201 20:07:59.346873       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:07:59.346880       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:07:59.349070       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1201 20:07:59.355145       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:07:59.370943       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:00.216707       1 controller.go:624] quota admission added evaluator for: namespaces
	I1201 20:08:00.253892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:08:00.292666       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1201 20:08:00.315118       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:00.324546       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:00.332330       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1201 20:08:00.384667       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.142.205"}
	I1201 20:08:00.405249       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.107.8"}
	I1201 20:08:11.526587       1 controller.go:624] quota admission added evaluator for: endpoints
	I1201 20:08:11.526648       1 controller.go:624] quota admission added evaluator for: endpoints
	I1201 20:08:11.550916       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:08:11.550918       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:08:11.574056       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428] <==
	I1201 20:08:11.605037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="27.741451ms"
	I1201 20:08:11.606570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.773699ms"
	I1201 20:08:11.606688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.147µs"
	I1201 20:08:11.609940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.181µs"
	I1201 20:08:11.610382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.938557ms"
	I1201 20:08:11.610495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.393µs"
	I1201 20:08:11.610544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.434µs"
	I1201 20:08:11.617600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.253µs"
	I1201 20:08:11.622822       1 shared_informer.go:318] Caches are synced for PVC protection
	I1201 20:08:11.622837       1 shared_informer.go:318] Caches are synced for persistent volume
	I1201 20:08:11.631052       1 shared_informer.go:318] Caches are synced for attach detach
	I1201 20:08:11.633314       1 shared_informer.go:318] Caches are synced for expand
	I1201 20:08:11.652719       1 shared_informer.go:318] Caches are synced for namespace
	I1201 20:08:11.686563       1 shared_informer.go:318] Caches are synced for ephemeral
	I1201 20:08:11.697956       1 shared_informer.go:318] Caches are synced for stateful set
	I1201 20:08:11.760561       1 shared_informer.go:318] Caches are synced for resource quota
	I1201 20:08:11.770818       1 shared_informer.go:318] Caches are synced for resource quota
	I1201 20:08:12.096605       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:08:12.159856       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:08:12.159895       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1201 20:08:15.996980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.895102ms"
	I1201 20:08:15.997095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.735µs"
	I1201 20:08:17.987552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.243µs"
	I1201 20:08:18.989854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.261µs"
	I1201 20:08:19.992171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.077µs"
	
	
	==> kube-proxy [36d96bfb7d9320bba36f604a96f0cf8192ff4654d2d9ddf86407363967e92dbe] <==
	I1201 20:08:00.327851       1 server_others.go:69] "Using iptables proxy"
	I1201 20:08:00.338486       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1201 20:08:00.381700       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:00.385341       1 server_others.go:152] "Using iptables Proxier"
	I1201 20:08:00.385385       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1201 20:08:00.385395       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1201 20:08:00.385434       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1201 20:08:00.385769       1 server.go:846] "Version info" version="v1.28.0"
	I1201 20:08:00.385788       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:00.387584       1 config.go:97] "Starting endpoint slice config controller"
	I1201 20:08:00.387614       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1201 20:08:00.387649       1 config.go:188] "Starting service config controller"
	I1201 20:08:00.387654       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1201 20:08:00.388227       1 config.go:315] "Starting node config controller"
	I1201 20:08:00.388248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1201 20:08:00.487936       1 shared_informer.go:318] Caches are synced for service config
	I1201 20:08:00.487980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1201 20:08:00.488584       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591] <==
	I1201 20:07:56.957257       1 serving.go:348] Generated self-signed cert in-memory
	I1201 20:07:59.321860       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1201 20:07:59.321883       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:07:59.327734       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1201 20:07:59.330397       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1201 20:07:59.332409       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1201 20:07:59.330435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:07:59.333538       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1201 20:07:59.330457       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1201 20:07:59.334344       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1201 20:07:59.330475       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1201 20:07:59.434477       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1201 20:07:59.434478       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1201 20:07:59.434478       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975075     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-xtables-lock\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975739     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-lib-modules\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975858     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-xtables-lock\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975890     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-lib-modules\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:08:08 old-k8s-version-217464 kubelet[732]: I1201 20:08:08.212922     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.598224     732 topology_manager.go:215] "Topology Admit Handler" podUID="c9b2eed4-9d0b-4f54-8c25-d864a3b6f855" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.602477     732 topology_manager.go:215] "Topology Admit Handler" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737760     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9b2eed4-9d0b-4f54-8c25-d864a3b6f855-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pfxh9\" (UID: \"c9b2eed4-9d0b-4f54-8c25-d864a3b6f855\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737820     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cck\" (UniqueName: \"kubernetes.io/projected/c9b2eed4-9d0b-4f54-8c25-d864a3b6f855-kube-api-access-74cck\") pod \"kubernetes-dashboard-8694d4445c-pfxh9\" (UID: \"c9b2eed4-9d0b-4f54-8c25-d864a3b6f855\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737961     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f9b023f-08b3-40cf-9ad7-21b541515595-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-gj8zq\" (UID: \"5f9b023f-08b3-40cf-9ad7-21b541515595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.738015     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzf6\" (UniqueName: \"kubernetes.io/projected/5f9b023f-08b3-40cf-9ad7-21b541515595-kube-api-access-7hzf6\") pod \"dashboard-metrics-scraper-5f989dc9cf-gj8zq\" (UID: \"5f9b023f-08b3-40cf-9ad7-21b541515595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:15 old-k8s-version-217464 kubelet[732]: I1201 20:08:15.982204     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9" podStartSLOduration=1.602993443 podCreationTimestamp="2025-12-01 20:08:11 +0000 UTC" firstStartedPulling="2025-12-01 20:08:11.92095908 +0000 UTC m=+16.124537789" lastFinishedPulling="2025-12-01 20:08:15.300095914 +0000 UTC m=+19.503674620" observedRunningTime="2025-12-01 20:08:15.981929135 +0000 UTC m=+20.185507849" watchObservedRunningTime="2025-12-01 20:08:15.982130274 +0000 UTC m=+20.185708990"
	Dec 01 20:08:17 old-k8s-version-217464 kubelet[732]: I1201 20:08:17.974363     732 scope.go:117] "RemoveContainer" containerID="83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: I1201 20:08:18.978498     732 scope.go:117] "RemoveContainer" containerID="83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: I1201 20:08:18.978622     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: E1201 20:08:18.978983     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:19 old-k8s-version-217464 kubelet[732]: I1201 20:08:19.982448     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:19 old-k8s-version-217464 kubelet[732]: E1201 20:08:19.982745     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:21 old-k8s-version-217464 kubelet[732]: I1201 20:08:21.905066     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:21 old-k8s-version-217464 kubelet[732]: E1201 20:08:21.905488     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:08:28 old-k8s-version-217464 kubelet[732]: I1201 20:08:28.684722     732 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: kubelet.service: Consumed 1.135s CPU time.
	
	
	==> kubernetes-dashboard [e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac] <==
	2025/12/01 20:08:15 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:15 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:15 Using secret token for csrf signing
	2025/12/01 20:08:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:15 Successful initial request to the apiserver, version: v1.28.0
	2025/12/01 20:08:15 Generating JWE encryption key
	2025/12/01 20:08:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:15 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:15 Creating in-cluster Sidecar client
	2025/12/01 20:08:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:15 Serving insecurely on HTTP port: 9090
	2025/12/01 20:08:15 Starting overwatch
	
	
	==> storage-provisioner [ecbcc841645dbe266d12b72d95aeff1393b6a4de72113d3f968cd8e953351ccc] <==
	I1201 20:08:00.276079       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:08:30.281839       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-217464 -n old-k8s-version-217464: exit status 2 (332.777103ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-217464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-217464
helpers_test.go:243: (dbg) docker inspect old-k8s-version-217464:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	        "Created": "2025-12-01T20:06:33.460541938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:07:49.618124237Z",
	            "FinishedAt": "2025-12-01T20:07:46.563763895Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/hosts",
	        "LogPath": "/var/lib/docker/containers/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9/e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9-json.log",
	        "Name": "/old-k8s-version-217464",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-217464:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-217464",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e59219b4cc96e5c6791ffc37de488b746740a8fa3a04c90cd5efd36df665e6a9",
	                "LowerDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/859c98cc16a359e96cd78ec206ea4e499d484fbb745b17501e76564dce7c678d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-217464",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-217464/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-217464",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-217464",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "54b0ef24bb1d6f5f0e03e3d8496751b6ca51023715f03e394a30adb4b6eacf24",
	            "SandboxKey": "/var/run/docker/netns/54b0ef24bb1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-217464": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1d25a2ef13e6f84d7dee9dd1a8ffb7c5ebd5713411470cffa733d6c3a1a597a",
	                    "EndpointID": "d64e441cfe130770075eb02e93a72afa372d90d1f7901ccb57fa19805c32e4d5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ce:bb:97:9d:89:de",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-217464",
	                        "e59219b4cc96"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464: exit status 2 (330.889694ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-217464 logs -n 25: (1.159634414s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-551864 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │                     │
	│ ssh     │ -p bridge-551864 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo containerd config dump                                                                                                                                                                                                  │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                               │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                               │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:28.477537  354303 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:28.477626  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477632  354303 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:28.477637  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477827  354303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:28.478234  354303 out.go:368] Setting JSON to false
	I1201 20:08:28.479648  354303 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6659,"bootTime":1764613049,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:28.479727  354303 start.go:143] virtualization: kvm guest
	I1201 20:08:28.481854  354303 out.go:179] * [embed-certs-990820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:28.483774  354303 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:28.483785  354303 notify.go:221] Checking for updates...
	I1201 20:08:28.485966  354303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:28.487075  354303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.488125  354303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:28.490714  354303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:28.494461  354303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:28.496133  354303 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:28.496872  354303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:28.537089  354303 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:28.537195  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.619437  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.601656972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.619588  354303 docker.go:319] overlay module found
	I1201 20:08:28.623382  354303 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:28.624440  354303 start.go:309] selected driver: docker
	I1201 20:08:28.624455  354303 start.go:927] validating driver "docker" against &{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.624559  354303 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:28.625273  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.724819  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.710117278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.725173  354303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:28.725212  354303 cni.go:84] Creating CNI manager for ""
	I1201 20:08:28.725327  354303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:28.725393  354303 start.go:353] cluster config:
	{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.727432  354303 out.go:179] * Starting "embed-certs-990820" primary control-plane node in "embed-certs-990820" cluster
	I1201 20:08:28.728767  354303 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:28.729983  354303 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:28.731353  354303 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:28.731391  354303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:28.731399  354303 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:28.731490  354303 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:28.731498  354303 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:28.731587  354303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json ...
	I1201 20:08:28.731658  354303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:28.759672  354303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:28.759701  354303 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:28.759721  354303 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:28.759756  354303 start.go:360] acquireMachinesLock for embed-certs-990820: {Name:mk0308557d4346623fb3193dcae3b8f2c186483d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:28.759830  354303 start.go:364] duration metric: took 48.101µs to acquireMachinesLock for "embed-certs-990820"
	I1201 20:08:28.759851  354303 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:28.759861  354303 fix.go:54] fixHost starting: 
	I1201 20:08:28.760161  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:28.783427  354303 fix.go:112] recreateIfNeeded on embed-certs-990820: state=Stopped err=<nil>
	W1201 20:08:28.783460  354303 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:27.643530  352497 cli_runner.go:164] Run: docker network inspect no-preload-240359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:08:27.662423  352497 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 20:08:27.666732  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.677834  352497 kubeadm.go:884] updating cluster {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:08:27.677959  352497 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:08:27.677993  352497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:08:27.712719  352497 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:08:27.712742  352497 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:08:27.712751  352497 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:27.712867  352497 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-240359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:27.712963  352497 ssh_runner.go:195] Run: crio config
	I1201 20:08:27.772580  352497 cni.go:84] Creating CNI manager for ""
	I1201 20:08:27.772666  352497 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:27.772704  352497 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:08:27.772740  352497 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-240359 NodeName:no-preload-240359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:27.772885  352497 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-240359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:08:27.772964  352497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:27.783386  352497 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:08:27.783477  352497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:27.793367  352497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:27.808748  352497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:27.823123  352497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1201 20:08:27.838818  352497 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:27.843068  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.855122  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:27.958473  352497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:27.982144  352497 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359 for IP: 192.168.85.2
	I1201 20:08:27.982163  352497 certs.go:195] generating shared ca certs ...
	I1201 20:08:27.982181  352497 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:27.982340  352497 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:27.982401  352497 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:27.982414  352497 certs.go:257] generating profile certs ...
	I1201 20:08:27.982519  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key
	I1201 20:08:27.982608  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c
	I1201 20:08:27.982668  352497 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key
	I1201 20:08:27.982803  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:27.982845  352497 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:27.982860  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:27.982897  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:27.982938  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:27.982982  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:27.983043  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:27.983729  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:28.004982  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:28.025922  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:28.058620  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:28.103045  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:28.132413  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:28.166561  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:28.185942  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:28.206527  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:28.228846  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:28.252449  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:28.281454  352497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:28.298144  352497 ssh_runner.go:195] Run: openssl version
	I1201 20:08:28.305187  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:28.314813  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318879  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318925  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.358724  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:28.368184  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:28.378620  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382761  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382803  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.419263  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:08:28.428910  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:28.438671  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442781  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442833  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.482005  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:08:28.491828  352497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:28.496205  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:08:28.556754  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:08:28.617794  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:08:28.678413  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:08:28.740451  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:08:28.796812  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:08:28.855649  352497 kubeadm.go:401] StartCluster: {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.855725  352497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:28.855768  352497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:28.892115  352497 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:08:28.892173  352497 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:08:28.892180  352497 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:08:28.892186  352497 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:08:28.892191  352497 cri.go:89] found id: ""
	I1201 20:08:28.892256  352497 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:08:28.913619  352497 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:28Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:28.913757  352497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:28.928853  352497 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:08:28.928874  352497 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:08:28.928929  352497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:08:28.939353  352497 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:08:28.940034  352497 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-240359" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.940417  352497 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-240359" cluster setting kubeconfig missing "no-preload-240359" context setting]
	I1201 20:08:28.940959  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.942361  352497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:08:28.954775  352497 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1201 20:08:28.954807  352497 kubeadm.go:602] duration metric: took 25.927718ms to restartPrimaryControlPlane
	I1201 20:08:28.954817  352497 kubeadm.go:403] duration metric: took 99.177392ms to StartCluster
	I1201 20:08:28.954834  352497 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.954908  352497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.956103  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.956326  352497 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:08:28.956456  352497 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:08:28.956582  352497 addons.go:70] Setting storage-provisioner=true in profile "no-preload-240359"
	I1201 20:08:28.956588  352497 config.go:182] Loaded profile config "no-preload-240359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:08:28.956609  352497 addons.go:239] Setting addon storage-provisioner=true in "no-preload-240359"
	I1201 20:08:28.956602  352497 addons.go:70] Setting dashboard=true in profile "no-preload-240359"
	W1201 20:08:28.956619  352497 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:08:28.956628  352497 addons.go:239] Setting addon dashboard=true in "no-preload-240359"
	W1201 20:08:28.956638  352497 addons.go:248] addon dashboard should already be in state true
	I1201 20:08:28.956643  352497 addons.go:70] Setting default-storageclass=true in profile "no-preload-240359"
	I1201 20:08:28.956653  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956657  352497 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-240359"
	I1201 20:08:28.956668  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956880  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957133  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957134  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.958590  352497 out.go:179] * Verifying Kubernetes components...
	I1201 20:08:28.960227  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:28.988202  352497 addons.go:239] Setting addon default-storageclass=true in "no-preload-240359"
	W1201 20:08:28.988226  352497 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:08:28.988254  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.988842  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.989083  352497 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:08:28.989087  352497 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:08:28.990473  352497 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:28.990492  352497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:08:28.990714  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:28.991820  352497 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:08:28.992947  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:08:28.992965  352497 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:08:28.993030  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:29.023954  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.025155  352497 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:08:29.025178  352497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:08:29.025234  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:29.026486  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.057718  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.128419  352497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:29.143369  352497 node_ready.go:35] waiting up to 6m0s for node "no-preload-240359" to be "Ready" ...
	I1201 20:08:29.152488  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:29.154955  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:08:29.154980  352497 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:08:29.172502  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:08:29.172524  352497 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:08:29.176426  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:08:29.192638  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:08:29.192665  352497 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:08:29.212505  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:08:29.212528  352497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:08:29.230832  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:08:29.230859  352497 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:08:29.245567  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:08:29.245596  352497 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:08:29.261562  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:08:29.261590  352497 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:08:29.276797  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:08:29.276822  352497 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:08:29.291680  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:08:29.291705  352497 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:08:29.311777  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:08:30.378752  352497 node_ready.go:49] node "no-preload-240359" is "Ready"
	I1201 20:08:30.378782  352497 node_ready.go:38] duration metric: took 1.235379119s for node "no-preload-240359" to be "Ready" ...
	I1201 20:08:30.378798  352497 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:08:30.378863  352497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:08:30.955743  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.803218163s)
	I1201 20:08:30.955791  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.779336679s)
	I1201 20:08:30.955914  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.644092233s)
	I1201 20:08:30.955945  352497 api_server.go:72] duration metric: took 1.999584405s to wait for apiserver process to appear ...
	I1201 20:08:30.955962  352497 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:08:30.955980  352497 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:08:30.957331  352497 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-240359 addons enable metrics-server
	
	I1201 20:08:30.960885  352497 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:08:30.960910  352497 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:08:30.963025  352497 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:08:30.964012  352497 addons.go:530] duration metric: took 2.007568717s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:08:31.456262  352497 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:08:31.463648  352497 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:08:31.463702  352497 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	
	
	==> CRI-O <==
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.307902549Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c05b6a727181822c29a44768573a9421df9ed62dfa43e6a766784f7c77692d9b/merged/etc/group: no such file or directory"
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.308330663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.338414366Z" level=info msg="Created container e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9/kubernetes-dashboard" id=e393f827-0074-4470-bf37-a46389eccb7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.338963619Z" level=info msg="Starting container: e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac" id=3791e3c1-a8f3-43ff-a8cf-a1141309ea71 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:15 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:15.340721099Z" level=info msg="Started container" PID=1525 containerID=e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9/kubernetes-dashboard id=3791e3c1-a8f3-43ff-a8cf-a1141309ea71 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1fe0b0e6512df10b1c36eea59afcd8fc1d633405b73eb3845bbd65e57b6878f
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.365369661Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=92f8635d-5d72-436e-85be-7bd6cd9f99e1 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.36614855Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67f863a5-32f9-4e42-9652-6a9f73ed73b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.368527672Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=182d18da-802d-48a2-8337-fdefe428a709 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.36863344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.37516729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.375699343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.406818418Z" level=info msg="Created container 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=182d18da-802d-48a2-8337-fdefe428a709 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.407463108Z" level=info msg="Starting container: 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed" id=058fc660-19fb-436a-9220-f03a4cff4be8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.409099646Z" level=info msg="Started container" PID=1753 containerID=83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper id=058fc660-19fb-436a-9220-f03a4cff4be8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b40270d9a95551ea930bbba4286b608e79ac69c117aace0a6f499d28324fc76
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.974807646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=261f8d14-bf0e-424e-9f45-657aa077b925 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.977698995Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a4b7bb23-ad40-448a-a555-a62b0d86c566 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.980913418Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=dec29c36-0ea3-4ce1-a16f-9b2735f34c7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.981044369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.988732739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:17 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:17.989532789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.015409297Z" level=info msg="Created container 2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=dec29c36-0ea3-4ce1-a16f-9b2735f34c7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.015942647Z" level=info msg="Starting container: 2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285" id=c015908c-2313-44de-862c-7a2bf0807e92 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.017981083Z" level=info msg="Started container" PID=1764 containerID=2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper id=c015908c-2313-44de-862c-7a2bf0807e92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b40270d9a95551ea930bbba4286b608e79ac69c117aace0a6f499d28324fc76
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.979807208Z" level=info msg="Removing container: 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed" id=c2f6733c-72cd-41bb-b3a0-8e39694ed95d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:18 old-k8s-version-217464 crio[569]: time="2025-12-01T20:08:18.988642063Z" level=info msg="Removed container 83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq/dashboard-metrics-scraper" id=c2f6733c-72cd-41bb-b3a0-8e39694ed95d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2136aec85187f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   1                   0b40270d9a955       dashboard-metrics-scraper-5f989dc9cf-gj8zq       kubernetes-dashboard
	e526c3a481a4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   a1fe0b0e6512d       kubernetes-dashboard-8694d4445c-pfxh9            kubernetes-dashboard
	4dd51eba90da2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           32 seconds ago      Running             coredns                     0                   b1c4501b4eaf9       coredns-5dd5756b68-jpv6h                         kube-system
	95bd7159ce5a2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           32 seconds ago      Running             busybox                     1                   24b980a04d088       busybox                                          default
	5abb611d7e9f7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           32 seconds ago      Running             kindnet-cni                 0                   0e464b5f0ea3d       kindnet-x9tkl                                    kube-system
	36d96bfb7d932       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           32 seconds ago      Running             kube-proxy                  0                   d0f1bf9bee9d1       kube-proxy-fjhhh                                 kube-system
	ecbcc841645db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           32 seconds ago      Exited              storage-provisioner         0                   f29924e3a3176       storage-provisioner                              kube-system
	9d50552004acc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           36 seconds ago      Running             etcd                        0                   037331d240a2d       etcd-old-k8s-version-217464                      kube-system
	50a711978543f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           36 seconds ago      Running             kube-scheduler              0                   b75d4ab87e199       kube-scheduler-old-k8s-version-217464            kube-system
	4649c73be5eb9       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           36 seconds ago      Running             kube-apiserver              0                   d4d22b070698f       kube-apiserver-old-k8s-version-217464            kube-system
	604c30dbad503       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           36 seconds ago      Running             kube-controller-manager     0                   e397ada9f06c4       kube-controller-manager-old-k8s-version-217464   kube-system
	
	
	==> coredns [4dd51eba90da2b0039140bea7a61cb891af1da1e61a21339fb4f21afc50fb187] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38871 - 14814 "HINFO IN 5245048074702711661.8598468947235307428. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029405188s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-217464
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-217464
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=old-k8s-version-217464
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_06_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:06:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-217464
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:08:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:07:59 +0000   Mon, 01 Dec 2025 20:07:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-217464
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                ed847e9c-b6d4-4f47-a0ed-41ae4070a3c6
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 coredns-5dd5756b68-jpv6h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 etcd-old-k8s-version-217464                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         104s
	  kube-system                 kindnet-x9tkl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-old-k8s-version-217464             250m (3%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-old-k8s-version-217464    200m (2%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-fjhhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-old-k8s-version-217464             100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gj8zq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pfxh9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x8 over 110s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 110s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x8 over 110s)  kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     104s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  104s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s                 kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           92s                  node-controller  Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller
	  Normal  NodeReady                78s                  kubelet          Node old-k8s-version-217464 status is now: NodeReady
	  Normal  Starting                 38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)    kubelet          Node old-k8s-version-217464 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                  node-controller  Node old-k8s-version-217464 event: Registered Node old-k8s-version-217464 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [9d50552004acc3398e94380698eb07bb142aa3e02f7fbe0cc985eae7f0f37421] <==
	{"level":"info","ts":"2025-12-01T20:07:56.475401Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-01T20:07:56.475492Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-01T20:07:56.476089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-01T20:07:56.476273Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-01T20:07:56.476592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:07:56.476663Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-01T20:07:56.477493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-01T20:07:56.477618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:07:56.479743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-01T20:07:56.477739Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-01T20:07:56.477777Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-01T20:07:58.265913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-01T20:07:58.265985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.265991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.265999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.266006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-01T20:07:58.267181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:07:58.267191Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-217464 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-01T20:07:58.267201Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-01T20:07:58.267439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-01T20:07:58.267484Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-01T20:07:58.268519Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-01T20:07:58.268514Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:08:33 up  1:51,  0 user,  load average: 3.47, 3.16, 2.26
	Linux old-k8s-version-217464 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5abb611d7e9f7dcb22802c91729f1a178ed173a48b43625dc06b56faba224150] <==
	I1201 20:08:00.555615       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:00.555943       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1201 20:08:00.556146       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:00.556168       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:00.556193       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:00.758321       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:00.758421       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:00.758434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:00.758561       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:01.158497       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:01.158523       1 metrics.go:72] Registering metrics
	I1201 20:08:01.158589       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:10.668395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:10.668449       1 main.go:301] handling current node
	I1201 20:08:20.666703       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:20.666755       1 main.go:301] handling current node
	I1201 20:08:30.675553       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1201 20:08:30.675593       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4649c73be5eb94a99d98990312bb2e4e017cd402e18aca29e4f14aacf404c25f] <==
	I1201 20:07:59.344190       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1201 20:07:59.344328       1 shared_informer.go:318] Caches are synced for configmaps
	I1201 20:07:59.344410       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:07:59.345538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1201 20:07:59.346781       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1201 20:07:59.346852       1 aggregator.go:166] initial CRD sync complete...
	I1201 20:07:59.346866       1 autoregister_controller.go:141] Starting autoregister controller
	I1201 20:07:59.346873       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:07:59.346880       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:07:59.349070       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1201 20:07:59.355145       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:07:59.370943       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:00.216707       1 controller.go:624] quota admission added evaluator for: namespaces
	I1201 20:08:00.253892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:08:00.292666       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1201 20:08:00.315118       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:00.324546       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:00.332330       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1201 20:08:00.384667       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.142.205"}
	I1201 20:08:00.405249       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.107.8"}
	I1201 20:08:11.526587       1 controller.go:624] quota admission added evaluator for: endpoints
	I1201 20:08:11.526648       1 controller.go:624] quota admission added evaluator for: endpoints
	I1201 20:08:11.550916       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:08:11.550918       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:08:11.574056       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [604c30dbad503e870547eb7624c394a7a220a65ecf82f3dccc6f24eca1a93428] <==
	I1201 20:08:11.605037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="27.741451ms"
	I1201 20:08:11.606570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.773699ms"
	I1201 20:08:11.606688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.147µs"
	I1201 20:08:11.609940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.181µs"
	I1201 20:08:11.610382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.938557ms"
	I1201 20:08:11.610495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.393µs"
	I1201 20:08:11.610544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="31.434µs"
	I1201 20:08:11.617600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.253µs"
	I1201 20:08:11.622822       1 shared_informer.go:318] Caches are synced for PVC protection
	I1201 20:08:11.622837       1 shared_informer.go:318] Caches are synced for persistent volume
	I1201 20:08:11.631052       1 shared_informer.go:318] Caches are synced for attach detach
	I1201 20:08:11.633314       1 shared_informer.go:318] Caches are synced for expand
	I1201 20:08:11.652719       1 shared_informer.go:318] Caches are synced for namespace
	I1201 20:08:11.686563       1 shared_informer.go:318] Caches are synced for ephemeral
	I1201 20:08:11.697956       1 shared_informer.go:318] Caches are synced for stateful set
	I1201 20:08:11.760561       1 shared_informer.go:318] Caches are synced for resource quota
	I1201 20:08:11.770818       1 shared_informer.go:318] Caches are synced for resource quota
	I1201 20:08:12.096605       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:08:12.159856       1 shared_informer.go:318] Caches are synced for garbage collector
	I1201 20:08:12.159895       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1201 20:08:15.996980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.895102ms"
	I1201 20:08:15.997095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.735µs"
	I1201 20:08:17.987552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.243µs"
	I1201 20:08:18.989854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.261µs"
	I1201 20:08:19.992171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.077µs"
	
	
	==> kube-proxy [36d96bfb7d9320bba36f604a96f0cf8192ff4654d2d9ddf86407363967e92dbe] <==
	I1201 20:08:00.327851       1 server_others.go:69] "Using iptables proxy"
	I1201 20:08:00.338486       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1201 20:08:00.381700       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:00.385341       1 server_others.go:152] "Using iptables Proxier"
	I1201 20:08:00.385385       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1201 20:08:00.385395       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1201 20:08:00.385434       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1201 20:08:00.385769       1 server.go:846] "Version info" version="v1.28.0"
	I1201 20:08:00.385788       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:00.387584       1 config.go:97] "Starting endpoint slice config controller"
	I1201 20:08:00.387614       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1201 20:08:00.387649       1 config.go:188] "Starting service config controller"
	I1201 20:08:00.387654       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1201 20:08:00.388227       1 config.go:315] "Starting node config controller"
	I1201 20:08:00.388248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1201 20:08:00.487936       1 shared_informer.go:318] Caches are synced for service config
	I1201 20:08:00.487980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1201 20:08:00.488584       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [50a711978543faddbcd266e3bb43a6bebfd689f26e2a35fcfedb4e228ede9591] <==
	I1201 20:07:56.957257       1 serving.go:348] Generated self-signed cert in-memory
	I1201 20:07:59.321860       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1201 20:07:59.321883       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:07:59.327734       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1201 20:07:59.330397       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1201 20:07:59.332409       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1201 20:07:59.330435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:07:59.333538       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1201 20:07:59.330457       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1201 20:07:59.334344       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1201 20:07:59.330475       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1201 20:07:59.434477       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1201 20:07:59.434478       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1201 20:07:59.434478       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975075     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-xtables-lock\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975739     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12564231-f1d8-4991-b32e-478ee1e61837-lib-modules\") pod \"kube-proxy-fjhhh\" (UID: \"12564231-f1d8-4991-b32e-478ee1e61837\") " pod="kube-system/kube-proxy-fjhhh"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975858     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-xtables-lock\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:07:59 old-k8s-version-217464 kubelet[732]: I1201 20:07:59.975890     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa3c072-c4e8-4d7c-ad9f-7ee7461ea900-lib-modules\") pod \"kindnet-x9tkl\" (UID: \"baa3c072-c4e8-4d7c-ad9f-7ee7461ea900\") " pod="kube-system/kindnet-x9tkl"
	Dec 01 20:08:08 old-k8s-version-217464 kubelet[732]: I1201 20:08:08.212922     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.598224     732 topology_manager.go:215] "Topology Admit Handler" podUID="c9b2eed4-9d0b-4f54-8c25-d864a3b6f855" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.602477     732 topology_manager.go:215] "Topology Admit Handler" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737760     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9b2eed4-9d0b-4f54-8c25-d864a3b6f855-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pfxh9\" (UID: \"c9b2eed4-9d0b-4f54-8c25-d864a3b6f855\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737820     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cck\" (UniqueName: \"kubernetes.io/projected/c9b2eed4-9d0b-4f54-8c25-d864a3b6f855-kube-api-access-74cck\") pod \"kubernetes-dashboard-8694d4445c-pfxh9\" (UID: \"c9b2eed4-9d0b-4f54-8c25-d864a3b6f855\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.737961     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f9b023f-08b3-40cf-9ad7-21b541515595-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-gj8zq\" (UID: \"5f9b023f-08b3-40cf-9ad7-21b541515595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:11 old-k8s-version-217464 kubelet[732]: I1201 20:08:11.738015     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzf6\" (UniqueName: \"kubernetes.io/projected/5f9b023f-08b3-40cf-9ad7-21b541515595-kube-api-access-7hzf6\") pod \"dashboard-metrics-scraper-5f989dc9cf-gj8zq\" (UID: \"5f9b023f-08b3-40cf-9ad7-21b541515595\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq"
	Dec 01 20:08:15 old-k8s-version-217464 kubelet[732]: I1201 20:08:15.982204     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pfxh9" podStartSLOduration=1.602993443 podCreationTimestamp="2025-12-01 20:08:11 +0000 UTC" firstStartedPulling="2025-12-01 20:08:11.92095908 +0000 UTC m=+16.124537789" lastFinishedPulling="2025-12-01 20:08:15.300095914 +0000 UTC m=+19.503674620" observedRunningTime="2025-12-01 20:08:15.981929135 +0000 UTC m=+20.185507849" watchObservedRunningTime="2025-12-01 20:08:15.982130274 +0000 UTC m=+20.185708990"
	Dec 01 20:08:17 old-k8s-version-217464 kubelet[732]: I1201 20:08:17.974363     732 scope.go:117] "RemoveContainer" containerID="83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: I1201 20:08:18.978498     732 scope.go:117] "RemoveContainer" containerID="83c14d72d7349b8845089cfc0bfd3bd3c4cc26502db5e6df626c9db5f6c048ed"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: I1201 20:08:18.978622     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:18 old-k8s-version-217464 kubelet[732]: E1201 20:08:18.978983     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:19 old-k8s-version-217464 kubelet[732]: I1201 20:08:19.982448     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:19 old-k8s-version-217464 kubelet[732]: E1201 20:08:19.982745     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:21 old-k8s-version-217464 kubelet[732]: I1201 20:08:21.905066     732 scope.go:117] "RemoveContainer" containerID="2136aec85187fc5918968a6aaf10b0e35a6836e5b8daed66b88f3b23edfc4285"
	Dec 01 20:08:21 old-k8s-version-217464 kubelet[732]: E1201 20:08:21.905488     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gj8zq_kubernetes-dashboard(5f9b023f-08b3-40cf-9ad7-21b541515595)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gj8zq" podUID="5f9b023f-08b3-40cf-9ad7-21b541515595"
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:08:28 old-k8s-version-217464 kubelet[732]: I1201 20:08:28.684722     732 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:08:28 old-k8s-version-217464 systemd[1]: kubelet.service: Consumed 1.135s CPU time.
	
	
	==> kubernetes-dashboard [e526c3a481a4f37409004fed70aaf4eed35df27e652d4df6f5e79f21b30ab3ac] <==
	2025/12/01 20:08:15 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:15 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:15 Using secret token for csrf signing
	2025/12/01 20:08:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:15 Successful initial request to the apiserver, version: v1.28.0
	2025/12/01 20:08:15 Generating JWE encryption key
	2025/12/01 20:08:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:15 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:15 Creating in-cluster Sidecar client
	2025/12/01 20:08:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:15 Serving insecurely on HTTP port: 9090
	2025/12/01 20:08:15 Starting overwatch
	
	
	==> storage-provisioner [ecbcc841645dbe266d12b72d95aeff1393b6a4de72113d3f968cd8e953351ccc] <==
	I1201 20:08:00.276079       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:08:30.281839       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-217464 -n old-k8s-version-217464
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-217464 -n old-k8s-version-217464: exit status 2 (339.92035ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-217464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (389.816594ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
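Note: the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-container check, which (per the stderr) shells out to "sudo runc list -f json" inside the node and treats the non-zero exit as a failure; the error indicates runc's state directory /run/runc does not exist on the node, possibly because cri-o keeps its runtime state elsewhere. A minimal way to rerun that same check by hand, assuming the profile name used in this test (illustrative command, not part of the captured run):

	out/minikube-linux-amd64 -p default-k8s-diff-port-009682 ssh -- sudo runc list -f json
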
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-009682 describe deploy/metrics-server -n kube-system: exit status 1 (99.16029ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-009682 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
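For context: the expected image string is built from the flags passed to the enable command, with the --registries value (fake.domain) prefixed to the --images value (registry.k8s.io/echoserver:1.4), and the test looks for it in the metrics-server deployment. A sketch of the same check done by hand, assuming the deployment had actually been created (illustrative command, not part of the captured run):

	kubectl --context default-k8s-diff-port-009682 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the check never gets that far: the addon enable itself exited with status 11, so the deployment was never created (NotFound above) and the expected image cannot be found.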
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-009682
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-009682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	        "Created": "2025-12-01T20:07:49.041220039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344919,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:07:49.083951573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hosts",
	        "LogPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb-json.log",
	        "Name": "/default-k8s-diff-port-009682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-009682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-009682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	                "LowerDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-009682",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-009682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-009682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9e48740d815568b60ae4776b1c69d0cb21fdaa2e15cc24b9dd19fe80f9816adb",
	            "SandboxKey": "/var/run/docker/netns/9e48740d8155",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-009682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae21c1908b572396f83bd86ca68adf4c8b9646d28fbd4ac53d2a1a3af1c0eae4",
	                    "EndpointID": "87871c3c014f14e509c27e8aeb16aefc31633f5cc13fb4554b19a095c35754b7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "fe:44:89:9c:42:e2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-009682",
	                        "0b0f250c2430"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25: (1.785472877s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-551864 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo containerd config dump                                                                                                                                                                                                  │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                             │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                              │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                               │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                               │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:28.477537  354303 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:28.477626  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477632  354303 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:28.477637  354303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:28.477827  354303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:28.478234  354303 out.go:368] Setting JSON to false
	I1201 20:08:28.479648  354303 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6659,"bootTime":1764613049,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:28.479727  354303 start.go:143] virtualization: kvm guest
	I1201 20:08:28.481854  354303 out.go:179] * [embed-certs-990820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:28.483774  354303 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:28.483785  354303 notify.go:221] Checking for updates...
	I1201 20:08:28.485966  354303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:28.487075  354303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.488125  354303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:28.490714  354303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:28.494461  354303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:28.496133  354303 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:28.496872  354303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:28.537089  354303 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:28.537195  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.619437  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.601656972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.619588  354303 docker.go:319] overlay module found
	I1201 20:08:28.623382  354303 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:28.624440  354303 start.go:309] selected driver: docker
	I1201 20:08:28.624455  354303 start.go:927] validating driver "docker" against &{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.624559  354303 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:28.625273  354303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:28.724819  354303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:08:28.710117278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:28.725173  354303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:28.725212  354303 cni.go:84] Creating CNI manager for ""
	I1201 20:08:28.725327  354303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:28.725393  354303 start.go:353] cluster config:
	{Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.727432  354303 out.go:179] * Starting "embed-certs-990820" primary control-plane node in "embed-certs-990820" cluster
	I1201 20:08:28.728767  354303 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:28.729983  354303 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:28.731353  354303 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:28.731391  354303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:28.731399  354303 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:28.731490  354303 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:28.731498  354303 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:28.731587  354303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json ...
	I1201 20:08:28.731658  354303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:28.759672  354303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:28.759701  354303 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:28.759721  354303 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:28.759756  354303 start.go:360] acquireMachinesLock for embed-certs-990820: {Name:mk0308557d4346623fb3193dcae3b8f2c186483d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:28.759830  354303 start.go:364] duration metric: took 48.101µs to acquireMachinesLock for "embed-certs-990820"
	I1201 20:08:28.759851  354303 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:28.759861  354303 fix.go:54] fixHost starting: 
	I1201 20:08:28.760161  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:28.783427  354303 fix.go:112] recreateIfNeeded on embed-certs-990820: state=Stopped err=<nil>
	W1201 20:08:28.783460  354303 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:27.643530  352497 cli_runner.go:164] Run: docker network inspect no-preload-240359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:08:27.662423  352497 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1201 20:08:27.666732  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.677834  352497 kubeadm.go:884] updating cluster {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:08:27.677959  352497 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:08:27.677993  352497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:08:27.712719  352497 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:08:27.712742  352497 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:08:27.712751  352497 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:27.712867  352497 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-240359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:27.712963  352497 ssh_runner.go:195] Run: crio config
	I1201 20:08:27.772580  352497 cni.go:84] Creating CNI manager for ""
	I1201 20:08:27.772666  352497 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:27.772704  352497 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:08:27.772740  352497 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-240359 NodeName:no-preload-240359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:27.772885  352497 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-240359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:08:27.772964  352497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:27.783386  352497 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:08:27.783477  352497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:27.793367  352497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:27.808748  352497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:27.823123  352497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
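The three scp'd files above are the kubelet systemd drop-in (10-kubeadm.conf), the kubelet service unit, and the freshly rendered kubeadm config; kubeadm.yaml.new is later diffed against the copy already on the node (the `sudo diff -u` run further down) to decide whether the running control plane needs reconfiguration. They can be inspected by hand on a comparable run (illustrative commands; profile name taken from this log):

    minikube -p no-preload-240359 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube -p no-preload-240359 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new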
	I1201 20:08:27.838818  352497 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:27.843068  352497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:27.855122  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:27.958473  352497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:27.982144  352497 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359 for IP: 192.168.85.2
	I1201 20:08:27.982163  352497 certs.go:195] generating shared ca certs ...
	I1201 20:08:27.982181  352497 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:27.982340  352497 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:27.982401  352497 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:27.982414  352497 certs.go:257] generating profile certs ...
	I1201 20:08:27.982519  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/client.key
	I1201 20:08:27.982608  352497 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key.e236d75c
	I1201 20:08:27.982668  352497 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key
	I1201 20:08:27.982803  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:27.982845  352497 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:27.982860  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:27.982897  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:27.982938  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:27.982982  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:27.983043  352497 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:27.983729  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:28.004982  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:28.025922  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:28.058620  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:28.103045  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:28.132413  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:28.166561  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:28.185942  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/no-preload-240359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:28.206527  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:28.228846  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:28.252449  352497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:28.281454  352497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:28.298144  352497 ssh_runner.go:195] Run: openssl version
	I1201 20:08:28.305187  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:28.314813  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318879  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.318925  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:28.358724  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:28.368184  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:28.378620  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382761  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.382803  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:28.419263  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:08:28.428910  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:28.438671  352497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442781  352497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.442833  352497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:28.482005  352497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
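The three cert/link pairs above follow the standard OpenSSL trust-store layout: each PEM is placed in /usr/share/ca-certificates and a symlink named after its subject-name hash plus a .0 suffix is created in /etc/ssl/certs, which is how b5213941.0, 51391683.0 and 3ec20f2e.0 map back to minikubeCA.pem, 16873.pem and 168732.pem. The hash can be reproduced directly on the node (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 in this run
    ls -l /etc/ssl/certs/b5213941.0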
	I1201 20:08:28.491828  352497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:28.496205  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:08:28.556754  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:08:28.617794  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:08:28.678413  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:08:28.740451  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:08:28.796812  352497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
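Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, i.e. a cheap expiry check. A minimal manual check on the node (illustrative):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" || echo "expires within 24h"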
	I1201 20:08:28.855649  352497 kubeadm.go:401] StartCluster: {Name:no-preload-240359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-240359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:28.855725  352497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:28.855768  352497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:28.892115  352497 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:08:28.892173  352497 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:08:28.892180  352497 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:08:28.892186  352497 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:08:28.892191  352497 cri.go:89] found id: ""
	I1201 20:08:28.892256  352497 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:08:28.913619  352497 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:28Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:28.913757  352497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:28.928853  352497 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:08:28.928874  352497 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:08:28.928929  352497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:08:28.939353  352497 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:08:28.940034  352497 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-240359" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.940417  352497 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-240359" cluster setting kubeconfig missing "no-preload-240359" context setting]
	I1201 20:08:28.940959  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.942361  352497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:08:28.954775  352497 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1201 20:08:28.954807  352497 kubeadm.go:602] duration metric: took 25.927718ms to restartPrimaryControlPlane
	I1201 20:08:28.954817  352497 kubeadm.go:403] duration metric: took 99.177392ms to StartCluster
	I1201 20:08:28.954834  352497 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.954908  352497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:28.956103  352497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:28.956326  352497 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:08:28.956456  352497 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:08:28.956582  352497 addons.go:70] Setting storage-provisioner=true in profile "no-preload-240359"
	I1201 20:08:28.956588  352497 config.go:182] Loaded profile config "no-preload-240359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:08:28.956609  352497 addons.go:239] Setting addon storage-provisioner=true in "no-preload-240359"
	I1201 20:08:28.956602  352497 addons.go:70] Setting dashboard=true in profile "no-preload-240359"
	W1201 20:08:28.956619  352497 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:08:28.956628  352497 addons.go:239] Setting addon dashboard=true in "no-preload-240359"
	W1201 20:08:28.956638  352497 addons.go:248] addon dashboard should already be in state true
	I1201 20:08:28.956643  352497 addons.go:70] Setting default-storageclass=true in profile "no-preload-240359"
	I1201 20:08:28.956653  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956657  352497 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-240359"
	I1201 20:08:28.956668  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.956880  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957133  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.957134  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.958590  352497 out.go:179] * Verifying Kubernetes components...
	I1201 20:08:28.960227  352497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:28.988202  352497 addons.go:239] Setting addon default-storageclass=true in "no-preload-240359"
	W1201 20:08:28.988226  352497 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:08:28.988254  352497 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:08:28.988842  352497 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:08:28.989083  352497 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:08:28.989087  352497 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:08:28.990473  352497 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:28.990492  352497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:08:28.990714  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:28.991820  352497 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:08:28.992947  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:08:28.992965  352497 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:08:28.993030  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:29.023954  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.025155  352497 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:08:29.025178  352497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:08:29.025234  352497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:08:29.026486  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.057718  352497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:08:29.128419  352497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:29.143369  352497 node_ready.go:35] waiting up to 6m0s for node "no-preload-240359" to be "Ready" ...
	I1201 20:08:29.152488  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:29.154955  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:08:29.154980  352497 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:08:29.172502  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:08:29.172524  352497 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:08:29.176426  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:08:29.192638  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:08:29.192665  352497 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:08:29.212505  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:08:29.212528  352497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:08:29.230832  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:08:29.230859  352497 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:08:29.245567  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:08:29.245596  352497 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:08:29.261562  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:08:29.261590  352497 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:08:29.276797  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:08:29.276822  352497 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:08:29.291680  352497 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:08:29.291705  352497 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:08:29.311777  352497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
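That single kubectl apply installs all ten dashboard manifests through the kubectl binary bundled on the node. A quick follow-up check (illustrative; the dashboard manifests conventionally create a kubernetes-dashboard namespace):

    kubectl --context no-preload-240359 -n kubernetes-dashboard get deploy,pods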
	I1201 20:08:30.378752  352497 node_ready.go:49] node "no-preload-240359" is "Ready"
	I1201 20:08:30.378782  352497 node_ready.go:38] duration metric: took 1.235379119s for node "no-preload-240359" to be "Ready" ...
	I1201 20:08:30.378798  352497 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:08:30.378863  352497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:08:30.955743  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.803218163s)
	I1201 20:08:30.955791  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.779336679s)
	I1201 20:08:30.955914  352497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.644092233s)
	I1201 20:08:30.955945  352497 api_server.go:72] duration metric: took 1.999584405s to wait for apiserver process to appear ...
	I1201 20:08:30.955962  352497 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:08:30.955980  352497 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:08:30.957331  352497 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-240359 addons enable metrics-server
	
	I1201 20:08:30.960885  352497 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:08:30.960910  352497 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:08:30.963025  352497 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:08:30.964012  352497 addons.go:530] duration metric: took 2.007568717s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:08:31.456262  352497 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:08:31.463648  352497 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:08:31.463702  352497 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
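Both 500 responses above stem from the same transient startup condition: immediately after the restart the rbac/bootstrap-roles hook (and, on the first probe, bootstrap-system-priority-classes) has not completed, and the second probe already shows the priority-classes hook flipping to ok. minikube simply keeps polling until /healthz returns 200. The same verbose endpoint can be queried by hand (illustrative; /healthz is normally readable without credentials on a default apiserver):

    kubectl --context no-preload-240359 get --raw '/healthz?verbose'
    # or, from the node, against the address shown in the log:
    curl -sk 'https://192.168.85.2:8443/healthz?verbose'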
	I1201 20:08:28.785131  354303 out.go:252] * Restarting existing docker container for "embed-certs-990820" ...
	I1201 20:08:28.785224  354303 cli_runner.go:164] Run: docker start embed-certs-990820
	I1201 20:08:29.145466  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:29.169367  354303 kic.go:430] container "embed-certs-990820" state is running.
	I1201 20:08:29.170042  354303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-990820
	I1201 20:08:29.198669  354303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/config.json ...
	I1201 20:08:29.198946  354303 machine.go:94] provisionDockerMachine start ...
	I1201 20:08:29.199031  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:29.223915  354303 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:29.224217  354303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1201 20:08:29.224236  354303 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:08:29.225019  354303 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39236->127.0.0.1:33123: read: connection reset by peer
	I1201 20:08:32.371246  354303 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-990820
	
	I1201 20:08:32.371273  354303 ubuntu.go:182] provisioning hostname "embed-certs-990820"
	I1201 20:08:32.371367  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:32.396255  354303 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:32.396580  354303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1201 20:08:32.396607  354303 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-990820 && echo "embed-certs-990820" | sudo tee /etc/hostname
	I1201 20:08:32.557412  354303 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-990820
	
	I1201 20:08:32.557486  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:32.576613  354303 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:32.576891  354303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1201 20:08:32.576922  354303 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-990820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-990820/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-990820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:08:32.720190  354303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:08:32.720220  354303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:08:32.720256  354303 ubuntu.go:190] setting up certificates
	I1201 20:08:32.720269  354303 provision.go:84] configureAuth start
	I1201 20:08:32.720370  354303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-990820
	I1201 20:08:32.742781  354303 provision.go:143] copyHostCerts
	I1201 20:08:32.742843  354303 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:08:32.742862  354303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:08:32.742947  354303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:08:32.743586  354303 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:08:32.743605  354303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:08:32.743661  354303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:08:32.743765  354303 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:08:32.743776  354303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:08:32.743817  354303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:08:32.743941  354303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.embed-certs-990820 san=[127.0.0.1 192.168.94.2 embed-certs-990820 localhost minikube]
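The machine-level server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.94.2, embed-certs-990820, localhost, minikube); they can be confirmed on the resulting file (illustrative):

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'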
	I1201 20:08:32.825928  354303 provision.go:177] copyRemoteCerts
	I1201 20:08:32.825990  354303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:08:32.826060  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:32.849626  354303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:08:32.956569  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:08:32.975548  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1201 20:08:32.994740  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:08:33.015877  354303 provision.go:87] duration metric: took 295.584624ms to configureAuth
	I1201 20:08:33.015908  354303 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:08:33.016120  354303 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:33.016223  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:33.038051  354303 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:33.038369  354303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1201 20:08:33.038404  354303 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:08:33.407613  354303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:08:33.407637  354303 machine.go:97] duration metric: took 4.208672777s to provisionDockerMachine
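The SSH command a few lines up wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and then restarted CRI-O. To confirm what landed in the file (illustrative):

    minikube -p embed-certs-990820 ssh -- cat /etc/sysconfig/crio.minikube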
	I1201 20:08:33.407651  354303 start.go:293] postStartSetup for "embed-certs-990820" (driver="docker")
	I1201 20:08:33.407663  354303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:08:33.407746  354303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:08:33.407794  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:33.429272  354303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:08:33.533629  354303 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:08:33.537918  354303 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:08:33.537949  354303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:08:33.537964  354303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:08:33.538115  354303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:08:33.538209  354303 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:08:33.538359  354303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:08:33.546059  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:33.564326  354303 start.go:296] duration metric: took 156.627784ms for postStartSetup
	I1201 20:08:33.564401  354303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:08:33.564464  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:33.584178  354303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:08:33.688342  354303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:08:33.692972  354303 fix.go:56] duration metric: took 4.933103924s for fixHost
	I1201 20:08:33.693000  354303 start.go:83] releasing machines lock for "embed-certs-990820", held for 4.933158054s
	I1201 20:08:33.693099  354303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-990820
	I1201 20:08:33.713435  354303 ssh_runner.go:195] Run: cat /version.json
	I1201 20:08:33.713478  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:33.713542  354303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:08:33.713625  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:33.732789  354303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:08:33.733054  354303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:08:33.829615  354303 ssh_runner.go:195] Run: systemctl --version
	I1201 20:08:33.895066  354303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:08:33.949337  354303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:08:33.955095  354303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:08:33.955173  354303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:08:33.963627  354303 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:08:33.963661  354303 start.go:496] detecting cgroup driver to use...
	I1201 20:08:33.963692  354303 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:08:33.963731  354303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:08:33.979929  354303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:08:33.994529  354303 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:08:33.994585  354303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:08:34.011870  354303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:08:34.025895  354303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:08:34.113622  354303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:08:34.208681  354303 docker.go:234] disabling docker service ...
	I1201 20:08:34.208748  354303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:08:34.224220  354303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:08:34.239580  354303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:08:34.346533  354303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:08:34.433018  354303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:08:34.446193  354303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:08:34.466353  354303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:08:34.466416  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.481662  354303 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:08:34.481731  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.493202  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.504848  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.515574  354303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:08:34.524349  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.533723  354303 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.542806  354303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:08:34.551829  354303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:08:34.559328  354303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:08:34.566768  354303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:34.655938  354303 ssh_runner.go:195] Run: sudo systemctl restart crio
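Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the registry.k8s.io/pause:3.10.1 pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and a default sysctl opening unprivileged ports, after which CRI-O is reloaded and restarted. The effective values can be checked on the node (illustrative):

    minikube -p embed-certs-990820 ssh "sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"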
	I1201 20:08:34.784957  354303 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:08:34.785018  354303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:08:34.789071  354303 start.go:564] Will wait 60s for crictl version
	I1201 20:08:34.789116  354303 ssh_runner.go:195] Run: which crictl
	I1201 20:08:34.792854  354303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:08:34.818796  354303 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:08:34.818867  354303 ssh_runner.go:195] Run: crio --version
	I1201 20:08:34.846946  354303 ssh_runner.go:195] Run: crio --version
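	Condensed, the CRI-O preparation logged above boils down to the following shell steps (a sketch assembled from the Run: lines; the socket path, pause image and drop-in file are exactly as logged):
	    # point crictl at the CRI-O socket
	    sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # set the pause image and the systemd cgroup driver in the CRI-O drop-in
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    # confirm the runtime answers on the socket (compare with the version block above)
	    sudo crictl version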
	I1201 20:08:34.875787  354303 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:08:34.876822  354303 cli_runner.go:164] Run: docker network inspect embed-certs-990820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:08:34.896122  354303 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1201 20:08:34.900710  354303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:34.911336  354303 kubeadm.go:884] updating cluster {Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:08:34.911446  354303 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:34.911491  354303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:08:34.944023  354303 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:08:34.944045  354303 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:08:34.944099  354303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:08:34.973805  354303 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:08:34.973829  354303 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:08:34.973838  354303 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1201 20:08:34.973970  354303 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-990820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:34.974070  354303 ssh_runner.go:195] Run: crio config
	I1201 20:08:35.024004  354303 cni.go:84] Creating CNI manager for ""
	I1201 20:08:35.024026  354303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:35.024044  354303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:08:35.024069  354303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-990820 NodeName:embed-certs-990820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:35.024249  354303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-990820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:08:35.024339  354303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:08:35.033240  354303 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:08:35.033377  354303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:35.041695  354303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1201 20:08:35.054624  354303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:08:35.068692  354303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
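	For orientation only: a rendered file like the kubeadm.yaml above is what kubeadm consumes on a fresh init. In this run the existing cluster is detected and reconfiguration is skipped (see the restartPrimaryControlPlane lines below), so the following is just an illustrative sketch of how such a config would be applied, not a command executed in this log:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml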
	I1201 20:08:35.082350  354303 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:35.086323  354303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:35.097114  354303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:35.193581  354303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:35.218397  354303 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820 for IP: 192.168.94.2
	I1201 20:08:35.218419  354303 certs.go:195] generating shared ca certs ...
	I1201 20:08:35.218438  354303 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:35.218606  354303 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:35.218690  354303 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:35.218706  354303 certs.go:257] generating profile certs ...
	I1201 20:08:35.218829  354303 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/client.key
	I1201 20:08:35.218944  354303 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/apiserver.key.7a5013c4
	I1201 20:08:35.219004  354303 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/proxy-client.key
	I1201 20:08:35.219160  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:35.219211  354303 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:35.219225  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:35.219263  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:35.219327  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:35.219371  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:35.219436  354303 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:35.220088  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:35.245422  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:35.268570  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:35.289795  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:35.316833  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1201 20:08:35.339230  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:08:35.361202  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:35.381895  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/embed-certs-990820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:35.400255  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:35.422513  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:35.444821  354303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:35.464498  354303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:35.480234  354303 ssh_runner.go:195] Run: openssl version
	I1201 20:08:35.488980  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:35.500090  354303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:35.505936  354303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:35.506093  354303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:35.560501  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:35.571686  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:35.583407  354303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:35.588386  354303 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:35.588441  354303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:35.644418  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:08:35.656310  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:35.668171  354303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:35.672852  354303 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:35.672930  354303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:35.728978  354303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
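	The openssl -hash calls above print the subject hash that OpenSSL uses to look up CA files in /etc/ssl/certs, which is why the symlinks end up named b5213941.0, 51391683.0 and 3ec20f2e.0. A minimal sketch of the same idiom for one certificate:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0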
	I1201 20:08:35.740565  354303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:35.746509  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:08:35.801905  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:08:35.862806  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:08:35.954589  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:08:36.020211  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:08:36.082228  354303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
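	Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, so the exit status tells the caller whether the cert would need regenerating. A one-liner sketch of the same check:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least another 24h" || echo "expires within 24h"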
	I1201 20:08:36.145549  354303 kubeadm.go:401] StartCluster: {Name:embed-certs-990820 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-990820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:36.145660  354303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:36.145711  354303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:36.203261  354303 cri.go:89] found id: "584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f"
	I1201 20:08:36.203295  354303 cri.go:89] found id: "25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b"
	I1201 20:08:36.203301  354303 cri.go:89] found id: "436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af"
	I1201 20:08:36.203314  354303 cri.go:89] found id: "43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5"
	I1201 20:08:36.203322  354303 cri.go:89] found id: ""
	I1201 20:08:36.203371  354303 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:08:36.221801  354303 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:08:36Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:08:36.221867  354303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:36.233622  354303 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:08:36.233641  354303 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:08:36.233889  354303 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:08:36.245102  354303 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:08:36.246166  354303 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-990820" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:36.246860  354303 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-990820" cluster setting kubeconfig missing "embed-certs-990820" context setting]
	I1201 20:08:36.247979  354303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:36.250079  354303 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:08:36.261021  354303 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1201 20:08:36.261108  354303 kubeadm.go:602] duration metric: took 27.460331ms to restartPrimaryControlPlane
	I1201 20:08:36.261130  354303 kubeadm.go:403] duration metric: took 115.591817ms to StartCluster
	I1201 20:08:36.261169  354303 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:36.261248  354303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:36.263762  354303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:36.264275  354303 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:08:36.264378  354303 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:36.264423  354303 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-990820"
	I1201 20:08:36.264490  354303 addons.go:70] Setting default-storageclass=true in profile "embed-certs-990820"
	I1201 20:08:36.264435  354303 addons.go:70] Setting dashboard=true in profile "embed-certs-990820"
	I1201 20:08:36.264517  354303 addons.go:239] Setting addon dashboard=true in "embed-certs-990820"
	I1201 20:08:36.264518  354303 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-990820"
	W1201 20:08:36.264526  354303 addons.go:248] addon dashboard should already be in state true
	I1201 20:08:36.264527  354303 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:08:36.264557  354303 host.go:66] Checking if "embed-certs-990820" exists ...
	I1201 20:08:36.264501  354303 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-990820"
	W1201 20:08:36.264601  354303 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:08:36.264620  354303 host.go:66] Checking if "embed-certs-990820" exists ...
	I1201 20:08:36.265480  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:36.266357  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:36.266908  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:36.279906  354303 out.go:179] * Verifying Kubernetes components...
	I1201 20:08:36.282806  354303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:36.303105  354303 addons.go:239] Setting addon default-storageclass=true in "embed-certs-990820"
	W1201 20:08:36.303130  354303 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:08:36.303205  354303 host.go:66] Checking if "embed-certs-990820" exists ...
	I1201 20:08:36.305828  354303 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:08:36.306398  354303 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:08:36.307465  354303 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:08:36.308901  354303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:08:36.308917  354303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:08:36.308969  354303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:08:36.314120  354303 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:08:31.956090  352497 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1201 20:08:31.961345  352497 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1201 20:08:31.962459  352497 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:08:31.962487  352497 api_server.go:131] duration metric: took 1.006518711s to wait for apiserver health ...
	I1201 20:08:31.962495  352497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:08:31.966537  352497 system_pods.go:59] 8 kube-system pods found
	I1201 20:08:31.966579  352497 system_pods.go:61] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:31.966591  352497 system_pods.go:61] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:08:31.966606  352497 system_pods.go:61] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:08:31.966616  352497 system_pods.go:61] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:08:31.966628  352497 system_pods.go:61] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:08:31.966641  352497 system_pods.go:61] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:08:31.966652  352497 system_pods.go:61] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:08:31.966659  352497 system_pods.go:61] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:31.966669  352497 system_pods.go:74] duration metric: took 4.168088ms to wait for pod list to return data ...
	I1201 20:08:31.966678  352497 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:08:31.969156  352497 default_sa.go:45] found service account: "default"
	I1201 20:08:31.969176  352497 default_sa.go:55] duration metric: took 2.4875ms for default service account to be created ...
	I1201 20:08:31.969184  352497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:08:31.972440  352497 system_pods.go:86] 8 kube-system pods found
	I1201 20:08:31.972463  352497 system_pods.go:89] "coredns-7d764666f9-6kzhv" [63c28884-3390-44f0-ba81-6f221ef923c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:08:31.972474  352497 system_pods.go:89] "etcd-no-preload-240359" [36c1e813-01d2-4cb6-a36b-50d4026fdac2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:08:31.972488  352497 system_pods.go:89] "kindnet-s7r55" [a8fe8570-bbb2-401d-92b1-9335633eea45] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:08:31.972501  352497 system_pods.go:89] "kube-apiserver-no-preload-240359" [a5fc590d-d925-47b0-b6d9-9f1e418f59e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:08:31.972513  352497 system_pods.go:89] "kube-controller-manager-no-preload-240359" [f334535d-397f-428e-a4da-4f64d87d3283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:08:31.972524  352497 system_pods.go:89] "kube-proxy-zbbsb" [6e217924-5490-46d1-80c4-6354cd3c3f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:08:31.972539  352497 system_pods.go:89] "kube-scheduler-no-preload-240359" [4c309867-a6ed-4318-86a1-cba4e8178d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:08:31.972551  352497 system_pods.go:89] "storage-provisioner" [55ca6ba6-903c-41c3-bf2f-47e674a452bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:08:31.972565  352497 system_pods.go:126] duration metric: took 3.372111ms to wait for k8s-apps to be running ...
	I1201 20:08:31.972576  352497 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:08:31.972620  352497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:31.986086  352497 system_svc.go:56] duration metric: took 13.503272ms WaitForService to wait for kubelet
	I1201 20:08:31.986111  352497 kubeadm.go:587] duration metric: took 3.029750228s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:31.986137  352497 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:08:31.989471  352497 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:08:31.989499  352497 node_conditions.go:123] node cpu capacity is 8
	I1201 20:08:31.989520  352497 node_conditions.go:105] duration metric: took 3.375083ms to run NodePressure ...
	I1201 20:08:31.989537  352497 start.go:242] waiting for startup goroutines ...
	I1201 20:08:31.989547  352497 start.go:247] waiting for cluster config update ...
	I1201 20:08:31.989565  352497 start.go:256] writing updated cluster config ...
	I1201 20:08:31.989855  352497 ssh_runner.go:195] Run: rm -f paused
	I1201 20:08:31.994470  352497 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:08:31.999485  352497 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:08:34.006460  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	W1201 20:08:36.014041  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
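	The same readiness condition can be checked by hand with kubectl against that cluster (a rough sketch; pod name and namespace are taken from the log above, kubeconfig/context flags omitted, and unlike the test it does not treat a deleted pod as success):
	    kubectl -n kube-system wait pod/coredns-7d764666f9-6kzhv --for=condition=Ready --timeout=240s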
	
	
	==> CRI-O <==
	Dec 01 20:08:25 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:25.579002161Z" level=info msg="Starting container: e0ec61dc3a7307f17794e1c9c56668b8107e8d6fd5521e086ea88d669f5031e1" id=8dbfa050-baa7-4f0e-9468-ef98352a4d7d name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:25 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:25.580933351Z" level=info msg="Started container" PID=1883 containerID=e0ec61dc3a7307f17794e1c9c56668b8107e8d6fd5521e086ea88d669f5031e1 description=kube-system/coredns-66bc5c9577-hf646/coredns id=8dbfa050-baa7-4f0e-9468-ef98352a4d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=22d19e114c919dd9f239eb9d7a33c38f02132d3038cbb38a2304a28d40c13713
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.211018995Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8ec549c8-d0d8-4571-bf53-d2402d0993ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.211112332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.216689227Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:62d32cc100552ef03b54b9172511626d78d20345bce8024b9f73fc25a5a433cd UID:0515a30f-e9f6-4729-b544-8ee69479d1f4 NetNS:/var/run/netns/b14d4fbb-7aea-4fd2-bf36-c0c9008b246d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006143c8}] Aliases:map[]}"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.216722142Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.228569181Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:62d32cc100552ef03b54b9172511626d78d20345bce8024b9f73fc25a5a433cd UID:0515a30f-e9f6-4729-b544-8ee69479d1f4 NetNS:/var/run/netns/b14d4fbb-7aea-4fd2-bf36-c0c9008b246d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006143c8}] Aliases:map[]}"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.228753678Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.229841298Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.231105449Z" level=info msg="Ran pod sandbox 62d32cc100552ef03b54b9172511626d78d20345bce8024b9f73fc25a5a433cd with infra container: default/busybox/POD" id=8ec549c8-d0d8-4571-bf53-d2402d0993ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.2325721Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2121f756-448c-4013-b8ad-67d9835a73d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.232736364Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2121f756-448c-4013-b8ad-67d9835a73d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.23280331Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2121f756-448c-4013-b8ad-67d9835a73d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.233728201Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be54470b-e79c-4be4-8fe6-6c575025bec5 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:28 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:28.235938594Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.588607088Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=be54470b-e79c-4be4-8fe6-6c575025bec5 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.589386108Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e0e183bd-2107-4045-8856-4403b3703310 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.590787222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66a38e5a-5f6b-4b7a-bf13-33714459494c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.594484049Z" level=info msg="Creating container: default/busybox/busybox" id=66db09b3-f69d-45c9-9f58-abf587cd39d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.594622536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.598693543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.599249439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.626234551Z" level=info msg="Created container e0e403ae8a69d0d83af44d0334a4b173d82f4b6175af759b14a539473d2ff6b7: default/busybox/busybox" id=66db09b3-f69d-45c9-9f58-abf587cd39d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.626886611Z" level=info msg="Starting container: e0e403ae8a69d0d83af44d0334a4b173d82f4b6175af759b14a539473d2ff6b7" id=e6af65f2-feec-4c04-b918-5b1df38a572b name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:29 default-k8s-diff-port-009682 crio[781]: time="2025-12-01T20:08:29.629078395Z" level=info msg="Started container" PID=1959 containerID=e0e403ae8a69d0d83af44d0334a4b173d82f4b6175af759b14a539473d2ff6b7 description=default/busybox/busybox id=e6af65f2-feec-4c04-b918-5b1df38a572b name=/runtime.v1.RuntimeService/StartContainer sandboxID=62d32cc100552ef03b54b9172511626d78d20345bce8024b9f73fc25a5a433cd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e0e403ae8a69d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   62d32cc100552       busybox                                                default
	e0ec61dc3a730       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   22d19e114c919       coredns-66bc5c9577-hf646                               kube-system
	0d3d3b988aad6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   dff12e6e85323       storage-provisioner                                    kube-system
	83286ff1402e7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   ebc3993645a2a       kindnet-pqt6x                                          kube-system
	7c77bf72c6b1f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   cd96695327fea       kube-proxy-fjn7h                                       kube-system
	164cbc981decb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   6620429d57c57       kube-apiserver-default-k8s-diff-port-009682            kube-system
	2d6772ce84c3e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   c48caca447450       kube-scheduler-default-k8s-diff-port-009682            kube-system
	854e7fed9ac95       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   56e8c6e99edf4       kube-controller-manager-default-k8s-diff-port-009682   kube-system
	8cbf520ec4561       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   2a6cb8798436b       etcd-default-k8s-diff-port-009682                      kube-system
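	The table above is the node's CRI container listing; it can be regenerated for this profile with something along these lines (a sketch; the profile name is taken from the pod names in the table):
	    out/minikube-linux-amd64 -p default-k8s-diff-port-009682 ssh "sudo crictl ps -a"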
	
	
	==> coredns [e0ec61dc3a7307f17794e1c9c56668b8107e8d6fd5521e086ea88d669f5031e1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46878 - 49071 "HINFO IN 4861086191620958968.6043177437826347391. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029866445s
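	The coredns section is that container's log stream; on the node it can be pulled directly with crictl using the container ID from the header above (a sketch):
	    sudo crictl logs e0ec61dc3a7307f17794e1c9c56668b8107e8d6fd5521e086ea88d669f5031e1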
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-009682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-009682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=default-k8s-diff-port-009682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_08_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-009682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:08:25 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:08:25 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:08:25 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:08:25 +0000   Mon, 01 Dec 2025 20:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-009682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                05b6424d-f307-4593-b87d-4cd8ab421755
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-hf646                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-009682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-pqt6x                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-009682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-009682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-fjn7h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-009682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-009682 event: Registered Node default-k8s-diff-port-009682 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-009682 status is now: NodeReady
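	This block corresponds to a node describe; to regenerate it against the same cluster (a sketch, kubeconfig/context flags omitted):
	    kubectl describe node default-k8s-diff-port-009682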
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [8cbf520ec45611aba375222ec24e1932aced5007bd441ef1bcaf2e4084085044] <==
	{"level":"warn","ts":"2025-12-01T20:08:04.940623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.946834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.954989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.961577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.970057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.976587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.983445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.990069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:04.996534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.003236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.011463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.017819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.024410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.030726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.037353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.043603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.050103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.056356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.062791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.069137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.075567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.081765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.093647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.099997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:05.106549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45638","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:08:38 up  1:51,  0 user,  load average: 4.55, 3.39, 2.34
	Linux default-k8s-diff-port-009682 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83286ff1402e743540d0d2f3fb12c0df460a9287f8474d8218e0b41e262e61dc] <==
	I1201 20:08:14.332930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:14.333159       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1201 20:08:14.333365       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:14.333406       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:14.333422       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:14.629674       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:14.758653       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:14.758736       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:14.758967       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:15.159040       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:15.159071       1 metrics.go:72] Registering metrics
	I1201 20:08:15.159149       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:24.633188       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:08:24.633249       1 main.go:301] handling current node
	I1201 20:08:34.630603       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:08:34.630640       1 main.go:301] handling current node
	
	
	==> kube-apiserver [164cbc981decb10aa357586b0592e30042e435949d6573de0c29426d4de3a49c] <==
	E1201 20:08:05.702822       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1201 20:08:05.706826       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:08:05.710336       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:08:05.710649       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1201 20:08:05.719765       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:08:05.720031       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:08:05.905478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:06.509627       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1201 20:08:06.513566       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:08:06.513590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:08:06.938064       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:06.973228       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:07.010975       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:08:07.016310       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1201 20:08:07.017393       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:08:07.020904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:08:07.550206       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:08:08.031916       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:08:08.040643       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:08:08.046944       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:08:12.756041       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:08:12.761090       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:08:13.452733       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:08:13.654301       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1201 20:08:36.036805       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38944: use of closed network connection
	
	
	==> kube-controller-manager [854e7fed9ac950a9668eabf4abdc241235e02373f86a203033ccc2692a5baeb6] <==
	I1201 20:08:12.548340       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:08:12.548349       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:08:12.548521       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1201 20:08:12.549765       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 20:08:12.549784       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 20:08:12.549798       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1201 20:08:12.549801       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:08:12.549830       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 20:08:12.550127       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1201 20:08:12.550278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 20:08:12.550367       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1201 20:08:12.550642       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:08:12.550648       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1201 20:08:12.551356       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1201 20:08:12.553052       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:08:12.553324       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:08:12.554718       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:08:12.564824       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:08:12.564875       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:08:12.564941       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:08:12.564957       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:08:12.564966       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:08:12.570043       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:08:12.572657       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-009682" podCIDRs=["10.244.0.0/24"]
	I1201 20:08:27.501211       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7c77bf72c6b1fd9ab96d088c8835f0f5f2eef07e86d96e12baaaf796fcc6f849] <==
	I1201 20:08:14.104262       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:08:14.170654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:08:14.271701       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:08:14.271769       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1201 20:08:14.271886       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:08:14.297083       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:14.297145       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:08:14.304397       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:08:14.304891       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:08:14.304917       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:14.307080       1 config.go:200] "Starting service config controller"
	I1201 20:08:14.307115       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:08:14.307242       1 config.go:309] "Starting node config controller"
	I1201 20:08:14.307280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:08:14.307321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:08:14.307469       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:08:14.307491       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:08:14.307513       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:08:14.307525       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:08:14.408041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:08:14.408143       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:08:14.408169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2d6772ce84c3edad586a4febe31414bd053f8ed98d494d00f1c84babe4557963] <==
	E1201 20:08:05.579909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:08:05.579920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:08:05.580023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:08:05.580344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:08:05.580357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:08:05.580436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:08:05.580438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:08:05.580538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:08:05.580532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:08:05.580605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:08:05.580591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:08:05.580533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:08:05.580736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:08:05.580747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:08:05.580776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:08:05.580783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:08:06.416051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:08:06.442478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:08:06.444440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:08:06.465810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:08:06.510277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:08:06.575892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:08:06.674389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 20:08:06.735737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1201 20:08:07.177139       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:08:08 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:08.911161    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-009682" podStartSLOduration=2.911141314 podStartE2EDuration="2.911141314s" podCreationTimestamp="2025-12-01 20:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:08.911109 +0000 UTC m=+1.122040402" watchObservedRunningTime="2025-12-01 20:08:08.911141314 +0000 UTC m=+1.122072719"
	Dec 01 20:08:08 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:08.933980    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-009682" podStartSLOduration=1.933961333 podStartE2EDuration="1.933961333s" podCreationTimestamp="2025-12-01 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:08.923442956 +0000 UTC m=+1.134374361" watchObservedRunningTime="2025-12-01 20:08:08.933961333 +0000 UTC m=+1.144892738"
	Dec 01 20:08:08 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:08.934110    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-009682" podStartSLOduration=1.934100453 podStartE2EDuration="1.934100453s" podCreationTimestamp="2025-12-01 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:08.933947418 +0000 UTC m=+1.144878822" watchObservedRunningTime="2025-12-01 20:08:08.934100453 +0000 UTC m=+1.145031858"
	Dec 01 20:08:08 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:08.942309    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-009682" podStartSLOduration=1.942268451 podStartE2EDuration="1.942268451s" podCreationTimestamp="2025-12-01 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:08.942065071 +0000 UTC m=+1.152996496" watchObservedRunningTime="2025-12-01 20:08:08.942268451 +0000 UTC m=+1.153199856"
	Dec 01 20:08:12 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:12.603556    1350 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 01 20:08:12 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:12.604417    1350 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.689431    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r9m5\" (UniqueName: \"kubernetes.io/projected/f4fdbbdd-f85d-420b-b618-6edfd4259349-kube-api-access-7r9m5\") pod \"kube-proxy-fjn7h\" (UID: \"f4fdbbdd-f85d-420b-b618-6edfd4259349\") " pod="kube-system/kube-proxy-fjn7h"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.689654    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4fdbbdd-f85d-420b-b618-6edfd4259349-kube-proxy\") pod \"kube-proxy-fjn7h\" (UID: \"f4fdbbdd-f85d-420b-b618-6edfd4259349\") " pod="kube-system/kube-proxy-fjn7h"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.689699    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4fdbbdd-f85d-420b-b618-6edfd4259349-xtables-lock\") pod \"kube-proxy-fjn7h\" (UID: \"f4fdbbdd-f85d-420b-b618-6edfd4259349\") " pod="kube-system/kube-proxy-fjn7h"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.689719    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4fdbbdd-f85d-420b-b618-6edfd4259349-lib-modules\") pod \"kube-proxy-fjn7h\" (UID: \"f4fdbbdd-f85d-420b-b618-6edfd4259349\") " pod="kube-system/kube-proxy-fjn7h"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.790596    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/358ffbfc-91b7-4ce9-a3ed-987d5af5abcf-cni-cfg\") pod \"kindnet-pqt6x\" (UID: \"358ffbfc-91b7-4ce9-a3ed-987d5af5abcf\") " pod="kube-system/kindnet-pqt6x"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.790639    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6885t\" (UniqueName: \"kubernetes.io/projected/358ffbfc-91b7-4ce9-a3ed-987d5af5abcf-kube-api-access-6885t\") pod \"kindnet-pqt6x\" (UID: \"358ffbfc-91b7-4ce9-a3ed-987d5af5abcf\") " pod="kube-system/kindnet-pqt6x"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.790696    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/358ffbfc-91b7-4ce9-a3ed-987d5af5abcf-xtables-lock\") pod \"kindnet-pqt6x\" (UID: \"358ffbfc-91b7-4ce9-a3ed-987d5af5abcf\") " pod="kube-system/kindnet-pqt6x"
	Dec 01 20:08:13 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:13.790815    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/358ffbfc-91b7-4ce9-a3ed-987d5af5abcf-lib-modules\") pod \"kindnet-pqt6x\" (UID: \"358ffbfc-91b7-4ce9-a3ed-987d5af5abcf\") " pod="kube-system/kindnet-pqt6x"
	Dec 01 20:08:14 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:14.913897    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pqt6x" podStartSLOduration=1.9138768000000002 podStartE2EDuration="1.9138768s" podCreationTimestamp="2025-12-01 20:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:14.913872873 +0000 UTC m=+7.124804278" watchObservedRunningTime="2025-12-01 20:08:14.9138768 +0000 UTC m=+7.124808206"
	Dec 01 20:08:14 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:14.924455    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fjn7h" podStartSLOduration=1.924434675 podStartE2EDuration="1.924434675s" podCreationTimestamp="2025-12-01 20:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:14.924421631 +0000 UTC m=+7.135353035" watchObservedRunningTime="2025-12-01 20:08:14.924434675 +0000 UTC m=+7.135366081"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.200838    1350 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.274526    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/329b9699-cf53-4f5f-b7c3-52f77070a59f-tmp\") pod \"storage-provisioner\" (UID: \"329b9699-cf53-4f5f-b7c3-52f77070a59f\") " pod="kube-system/storage-provisioner"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.274592    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/959685f2-3196-405c-b2f8-bb177bd28bcf-config-volume\") pod \"coredns-66bc5c9577-hf646\" (UID: \"959685f2-3196-405c-b2f8-bb177bd28bcf\") " pod="kube-system/coredns-66bc5c9577-hf646"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.274634    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85fw7\" (UniqueName: \"kubernetes.io/projected/959685f2-3196-405c-b2f8-bb177bd28bcf-kube-api-access-85fw7\") pod \"coredns-66bc5c9577-hf646\" (UID: \"959685f2-3196-405c-b2f8-bb177bd28bcf\") " pod="kube-system/coredns-66bc5c9577-hf646"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.274662    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pc6r\" (UniqueName: \"kubernetes.io/projected/329b9699-cf53-4f5f-b7c3-52f77070a59f-kube-api-access-5pc6r\") pod \"storage-provisioner\" (UID: \"329b9699-cf53-4f5f-b7c3-52f77070a59f\") " pod="kube-system/storage-provisioner"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.937992    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hf646" podStartSLOduration=12.937972658 podStartE2EDuration="12.937972658s" podCreationTimestamp="2025-12-01 20:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:25.937964844 +0000 UTC m=+18.148896249" watchObservedRunningTime="2025-12-01 20:08:25.937972658 +0000 UTC m=+18.148904062"
	Dec 01 20:08:25 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:25.947141    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.94712305 podStartE2EDuration="11.94712305s" podCreationTimestamp="2025-12-01 20:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:08:25.947055005 +0000 UTC m=+18.157986411" watchObservedRunningTime="2025-12-01 20:08:25.94712305 +0000 UTC m=+18.158054456"
	Dec 01 20:08:27 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:27.995092    1350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687qh\" (UniqueName: \"kubernetes.io/projected/0515a30f-e9f6-4729-b544-8ee69479d1f4-kube-api-access-687qh\") pod \"busybox\" (UID: \"0515a30f-e9f6-4729-b544-8ee69479d1f4\") " pod="default/busybox"
	Dec 01 20:08:29 default-k8s-diff-port-009682 kubelet[1350]: I1201 20:08:29.953743    1350 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.596644221 podStartE2EDuration="2.953719301s" podCreationTimestamp="2025-12-01 20:08:27 +0000 UTC" firstStartedPulling="2025-12-01 20:08:28.233168733 +0000 UTC m=+20.444100129" lastFinishedPulling="2025-12-01 20:08:29.590243824 +0000 UTC m=+21.801175209" observedRunningTime="2025-12-01 20:08:29.953676256 +0000 UTC m=+22.164607662" watchObservedRunningTime="2025-12-01 20:08:29.953719301 +0000 UTC m=+22.164650707"
	
	
	==> storage-provisioner [0d3d3b988aad601ae7c223f358a5df0cf9cc9fd9a8e0b65480d77738c68b2902] <==
	I1201 20:08:25.586422       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:08:25.594966       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:08:25.595030       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:08:25.597465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:25.603663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:08:25.603851       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:08:25.604011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_258ce214-8c5b-4d72-acd5-0d1839b97042!
	I1201 20:08:25.603990       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40bc02c5-697a-4268-94f8-e188e6079112", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-009682_258ce214-8c5b-4d72-acd5-0d1839b97042 became leader
	W1201 20:08:25.606493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:25.610164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:08:25.704358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_258ce214-8c5b-4d72-acd5-0d1839b97042!
	W1201 20:08:27.613269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:27.617571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:29.621480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:29.628381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:31.632206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:31.636474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:33.640375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:33.646040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:35.650172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:35.654994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:37.660064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:08:37.667152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
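The storage-provisioner entries at the end of the log above are client-go deprecation warnings, not errors: the provisioner still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) as its leader-election lock. A minimal way to look at that lock by hand, assuming the same kubectl context as the test (illustrative only, not part of the test run):

	kubectl --context default-k8s-diff-port-009682 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader annotation on that object should match the "became leader" event recorded in the log.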
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (327.439635ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
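The exit status 11 above comes from minikube's paused-state check during `addons enable`, which runs `sudo runc list -f json` on the node; on this cri-o node the runc state directory /run/runc is missing, so the check fails before metrics-server is ever touched. A way to reproduce the check by hand, assuming the profile is still running (the first command is the one quoted verbatim in the stderr above; the second is only cri-o's own view of the containers, not what minikube runs):

	out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo crictl ps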
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-456990
helpers_test.go:243: (dbg) docker inspect newest-cni-456990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	        "Created": "2025-12-01T20:08:39.724872977Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 359661,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:39.787024447Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hosts",
	        "LogPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d-json.log",
	        "Name": "/newest-cni-456990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-456990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-456990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	                "LowerDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456990",
	                "Source": "/var/lib/docker/volumes/newest-cni-456990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456990",
	                "name.minikube.sigs.k8s.io": "newest-cni-456990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8c6ceca37e851f08632d0edaf7b7d8fedbd91bc9f5196ff78226c694bc9b10f7",
	            "SandboxKey": "/var/run/docker/netns/8c6ceca37e85",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-456990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6836073b2b5aeb1e24b66aebffccfdf9f8c813eeb874cb4432e2209cabcc4ee5",
	                    "EndpointID": "ab9a682b7f26531b7d43c949f494823af412e812ac19c652fe83dd2948761e2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "da:63:25:0a:c1:d7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456990",
	                        "9f5dab6a37e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
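When only one field from the inspect output matters (for example which host port the apiserver's 8443/tcp is published on), a Go-template query against the same container is shorter than the full JSON dump; this is a generic docker CLI feature, not something the test harness itself runs:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-456990

Given the NetworkSettings block above, this prints 33131.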
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-456990 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-456990 logs -n 25: (1.229855001s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-551864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                        │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ ssh     │ -p bridge-551864 sudo crio config                                                                                                                                                                                                                    │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p bridge-551864                                                                                                                                                                                                                                     │ bridge-551864                │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                                      │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:57.524741  363421 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:57.524856  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.524866  363421 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:57.524872  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.525166  363421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:57.525742  363421 out.go:368] Setting JSON to false
	I1201 20:08:57.527230  363421 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6688,"bootTime":1764613049,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:57.527326  363421 start.go:143] virtualization: kvm guest
	I1201 20:08:57.529688  363421 out.go:179] * [default-k8s-diff-port-009682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:57.530978  363421 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:57.530985  363421 notify.go:221] Checking for updates...
	I1201 20:08:57.532313  363421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:57.533552  363421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:57.534766  363421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:57.535947  363421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:57.537115  363421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:57.538758  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:57.539252  363421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:57.564657  363421 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:57.564748  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.627789  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.613982153 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.627885  363421 docker.go:319] overlay module found
	I1201 20:08:57.629736  363421 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:57.630805  363421 start.go:309] selected driver: docker
	I1201 20:08:57.630817  363421 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.630891  363421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:57.631486  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.694034  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.682818846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.694423  363421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:57.694466  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:08:57.694533  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:57.694577  363421 start.go:353] cluster config:
	{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.696647  363421 out.go:179] * Starting "default-k8s-diff-port-009682" primary control-plane node in "default-k8s-diff-port-009682" cluster
	I1201 20:08:57.697915  363421 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:57.699088  363421 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:54.033979  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.451822367s)
	I1201 20:08:54.034006  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1201 20:08:54.034040  358766 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:54.034079  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:55.285959  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.251855407s)
	I1201 20:08:55.285986  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1201 20:08:55.286009  358766 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.286056  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.835833  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 20:08:55.835878  358766 cache_images.go:125] Successfully loaded all cached images
	I1201 20:08:55.835887  358766 cache_images.go:94] duration metric: took 9.220203533s to LoadCachedImages
	I1201 20:08:55.835902  358766 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:55.836000  358766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
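	The kubelet unit rendered above uses the standard systemd drop-in idiom: an empty ExecStart= first clears the base unit's command, and the second ExecStart= supplies the minikube-specific one (the file is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). A minimal hand-run sketch of that idiom, with the paths and an abbreviated flag set taken from the log, is shown here for orientation; it is not minikube's actual code path:

	# install a drop-in that clears and then overrides ExecStart, then reload systemd
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=newest-cni-456990 --node-ip=192.168.76.2
	EOF
	sudo systemctl daemon-reload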
	I1201 20:08:55.836092  358766 ssh_runner.go:195] Run: crio config
	I1201 20:08:55.882185  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:08:55.882204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:55.882221  358766 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:08:55.882240  358766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:55.882388  358766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
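	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init later in this log. A config like this can also be sanity-checked before init, assuming a kubeadm recent enough to ship the config validate subcommand (a hedged sketch, not part of minikube's flow):

	# check the rendered config without touching the node
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# or simulate the full init without applying changes
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run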
	
	I1201 20:08:55.882456  358766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.896478  358766 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 20:08:55.896542  358766 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.905428  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1201 20:08:55.905471  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1201 20:08:55.905478  358766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:55.905492  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1201 20:08:55.905548  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 20:08:55.905560  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 20:08:55.924100  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 20:08:55.924135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1201 20:08:55.924162  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 20:08:55.924163  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 20:08:55.924196  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1201 20:08:55.931240  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 20:08:55.931269  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
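	The kubelet, kubeadm and kubectl binaries for v1.35.0-beta.0 are resolved against dl.k8s.io with a checksum=file: URL and placed under /var/lib/minikube/binaries on the node, as the lines above show. For reference, the equivalent manual, checksum-verified download using the same release URLs (kubectl shown only; a sketch, not minikube's internal fetcher):

	curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check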
	I1201 20:08:56.484601  358766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:56.493223  358766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:56.506733  358766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:56.551910  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:08:56.565479  358766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:56.569659  358766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:56.674504  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:56.766035  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:56.790444  358766 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:08:56.790466  358766 certs.go:195] generating shared ca certs ...
	I1201 20:08:56.790488  358766 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.790666  358766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:56.790711  358766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:56.790722  358766 certs.go:257] generating profile certs ...
	I1201 20:08:56.790775  358766 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:08:56.790787  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt with IP's: []
	I1201 20:08:56.856182  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt ...
	I1201 20:08:56.856207  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt: {Name:mk188d1d1ba3b1359a8c4c959ae5d3c192a20a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856394  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key ...
	I1201 20:08:56.856408  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key: {Name:mkb94c2da30d31143505840f4576d1cd1a4db927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856490  358766 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:08:56.856504  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1201 20:08:57.050302  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 ...
	I1201 20:08:57.050328  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757: {Name:mkeefb489f4b625e46090918386fdc47c61b5f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050500  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 ...
	I1201 20:08:57.050517  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757: {Name:mkf596c61e744a065cd8401e41d8e454de70b079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050632  358766 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt
	I1201 20:08:57.050717  358766 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key
	I1201 20:08:57.050771  358766 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:08:57.050786  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt with IP's: []
	I1201 20:08:57.090707  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt ...
	I1201 20:08:57.090730  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt: {Name:mk173cd6fe67eab6f70384a04dff60d8ad263813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.090894  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key ...
	I1201 20:08:57.090908  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key: {Name:mk07102f58d64e403b75622a5498a55b5a7d2682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.091078  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:57.091119  358766 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:57.091129  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:57.091155  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:57.091178  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:57.091204  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:57.091249  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:57.091846  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:57.110296  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:57.127543  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:57.145135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:57.161965  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:57.178832  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:57.196202  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:57.216297  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:57.235646  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:57.255802  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:57.274205  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:57.291845  358766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:57.305221  358766 ssh_runner.go:195] Run: openssl version
	I1201 20:08:57.311715  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:57.321501  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325823  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325889  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.365528  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:08:57.375267  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:57.384499  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388796  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388853  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.427537  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:57.436653  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:57.446332  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450883  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450941  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.485407  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
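	The certificate installation steps above follow the usual OpenSSL CA-store convention: each PEM is linked into /etc/ssl/certs under a name derived from its subject hash (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of that convention, assuming the cert paths from the log:

	# the subject hash printed by openssl names the /etc/ssl/certs/<hash>.0 symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"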
	I1201 20:08:57.494810  358766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:57.498985  358766 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:08:57.499041  358766 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.499130  358766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:57.499181  358766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:57.528197  358766 cri.go:89] found id: ""
	I1201 20:08:57.528247  358766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:57.536955  358766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:08:57.545150  358766 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:08:57.545217  358766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:08:57.553840  358766 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:08:57.553872  358766 kubeadm.go:158] found existing configuration files:
	
	I1201 20:08:57.553923  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:08:57.562547  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:08:57.562603  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:08:57.570825  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:08:57.579016  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:08:57.579104  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:08:57.588155  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.598007  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:08:57.598081  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.607460  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:08:57.616501  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:08:57.616576  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:08:57.625112  358766 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:08:57.668430  358766 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 20:08:57.668522  358766 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:08:57.700560  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:57.700599  363421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:57.700606  363421 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:57.700646  363421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:57.700699  363421 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:57.700709  363421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:57.700830  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:57.725595  363421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:57.725622  363421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:57.725643  363421 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:57.725678  363421 start.go:360] acquireMachinesLock for default-k8s-diff-port-009682: {Name:mk42586c39f050856fb58aa29e83d0a77c4546b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:57.725749  363421 start.go:364] duration metric: took 47.794µs to acquireMachinesLock for "default-k8s-diff-port-009682"
	I1201 20:08:57.725771  363421 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:57.725786  363421 fix.go:54] fixHost starting: 
	I1201 20:08:57.726056  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:57.747795  363421 fix.go:112] recreateIfNeeded on default-k8s-diff-port-009682: state=Stopped err=<nil>
	W1201 20:08:57.747827  363421 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:57.757685  358766 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:08:57.757794  358766 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:08:57.757867  358766 kubeadm.go:319] OS: Linux
	I1201 20:08:57.757937  358766 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:08:57.758000  358766 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:08:57.758103  358766 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:08:57.758195  358766 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:08:57.758280  358766 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:08:57.758368  358766 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:08:57.758454  358766 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:08:57.758515  358766 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:08:57.824201  358766 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:08:57.824361  358766 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:08:57.824478  358766 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:08:57.839908  358766 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1201 20:08:54.705077  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:08:57.204772  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:08:57.842269  358766 out.go:252]   - Generating certificates and keys ...
	I1201 20:08:57.842407  358766 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:08:57.842551  358766 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:08:57.881252  358766 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:08:58.037461  358766 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:08:58.107548  358766 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:08:58.187232  358766 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:08:58.505054  358766 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:08:58.505252  358766 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.539384  358766 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:08:58.539557  358766 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.601325  358766 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:08:58.651270  358766 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:08:58.937961  358766 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:08:58.938159  358766 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:08:59.070341  358766 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:08:59.130405  358766 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:08:59.174058  358766 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:08:59.235555  358766 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:08:59.401392  358766 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:08:59.401904  358766 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:08:59.405522  358766 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1201 20:08:58.006721  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	W1201 20:09:00.505892  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:08:57.749349  363421 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-009682" ...
	I1201 20:08:57.749457  363421 cli_runner.go:164] Run: docker start default-k8s-diff-port-009682
	I1201 20:08:58.018381  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:58.043206  363421 kic.go:430] container "default-k8s-diff-port-009682" state is running.
	I1201 20:08:58.043709  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:08:58.063866  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:58.064140  363421 machine.go:94] provisionDockerMachine start ...
	I1201 20:08:58.064229  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:08:58.083160  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:58.083444  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:08:58.083458  363421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:08:58.084209  363421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37088->127.0.0.1:33133: read: connection reset by peer
	I1201 20:09:01.230589  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.230617  363421 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-009682"
	I1201 20:09:01.230674  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.253348  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.253664  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.253688  363421 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-009682 && echo "default-k8s-diff-port-009682" | sudo tee /etc/hostname
	I1201 20:09:01.411152  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.411226  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.435481  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.435749  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.435776  363421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-009682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-009682/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-009682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:01.579541  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:01.579565  363421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:01.579613  363421 ubuntu.go:190] setting up certificates
	I1201 20:09:01.579630  363421 provision.go:84] configureAuth start
	I1201 20:09:01.579679  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:01.598330  363421 provision.go:143] copyHostCerts
	I1201 20:09:01.598405  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:01.598423  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:01.598511  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:01.598683  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:01.598697  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:01.598736  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:01.598833  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:01.598844  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:01.598881  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:01.598980  363421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-009682 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-009682 localhost minikube]
	I1201 20:09:01.737971  363421 provision.go:177] copyRemoteCerts
	I1201 20:09:01.738050  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:01.738109  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.762885  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:01.874168  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:01.893977  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:09:01.912032  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:01.930036  363421 provision.go:87] duration metric: took 350.392221ms to configureAuth
	I1201 20:09:01.930066  363421 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:01.930245  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:01.930379  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.950447  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.950661  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.950679  363421 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:02.295040  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:02.295063  363421 machine.go:97] duration metric: took 4.230905038s to provisionDockerMachine
	I1201 20:09:02.295074  363421 start.go:293] postStartSetup for "default-k8s-diff-port-009682" (driver="docker")
	I1201 20:09:02.295086  363421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:02.295140  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:02.295192  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.314605  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.417273  363421 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:02.420863  363421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:02.420886  363421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:02.420897  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:02.420943  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:02.421012  363421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:02.421096  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:02.429052  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:02.447160  363421 start.go:296] duration metric: took 152.072363ms for postStartSetup
	I1201 20:09:02.447237  363421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:02.447272  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.467442  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:08:59.406942  358766 out.go:252]   - Booting up control plane ...
	I1201 20:08:59.407069  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:08:59.407186  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:08:59.407725  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:08:59.421400  358766 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:08:59.421548  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:08:59.429946  358766 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:08:59.430243  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:08:59.430328  358766 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:08:59.525457  358766 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:08:59.525628  358766 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:09:00.027176  358766 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.895523ms
	I1201 20:09:00.029992  358766 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:09:00.030115  358766 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1201 20:09:00.030278  358766 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:09:00.030365  358766 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:09:01.034944  358766 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004762142s
	I1201 20:09:01.771813  358766 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.741647999s
	W1201 20:08:59.205004  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:01.709711  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:03.531458  358766 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501373313s
	I1201 20:09:03.549804  358766 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:09:03.560547  358766 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:09:03.570543  358766 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:09:03.570792  358766 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-456990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:09:03.579453  358766 kubeadm.go:319] [bootstrap-token] Using token: t6nth9.1dme03npps7xtqxg
	I1201 20:09:02.564699  363421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:02.569398  363421 fix.go:56] duration metric: took 4.843608039s for fixHost
	I1201 20:09:02.569438  363421 start.go:83] releasing machines lock for "default-k8s-diff-port-009682", held for 4.843675394s
	I1201 20:09:02.569512  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:02.588215  363421 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:02.588256  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.588344  363421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:02.588479  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.607456  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.607749  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.769630  363421 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:02.777217  363421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:02.819594  363421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:02.825242  363421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:02.825319  363421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:02.834483  363421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:02.834510  363421 start.go:496] detecting cgroup driver to use...
	I1201 20:09:02.834562  363421 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:02.834631  363421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:02.850900  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:02.866607  363421 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:02.866666  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:02.885043  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:02.900602  363421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:03.001146  363421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:03.104903  363421 docker.go:234] disabling docker service ...
	I1201 20:09:03.104982  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:03.121947  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:03.139525  363421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:03.252507  363421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:03.356626  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:03.369483  363421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:03.383959  363421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:03.384018  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.392886  363421 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:03.392948  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.402431  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.411640  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.422189  363421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:03.432194  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.441678  363421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.450620  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.460183  363421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:03.467584  363421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:03.475047  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:03.567439  363421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:03.699774  363421 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:03.699841  363421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:03.704895  363421 start.go:564] Will wait 60s for crictl version
	I1201 20:09:03.704954  363421 ssh_runner.go:195] Run: which crictl
	I1201 20:09:03.708839  363421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:03.734207  363421 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:03.734306  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.768401  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.804334  363421 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1201 20:09:03.580798  358766 out.go:252]   - Configuring RBAC rules ...
	I1201 20:09:03.580985  358766 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:09:03.585627  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:09:03.591157  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:09:03.594557  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:09:03.596997  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:09:03.599538  358766 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:09:03.937260  358766 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:09:04.355604  358766 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:09:04.940044  358766 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:09:04.942081  358766 kubeadm.go:319] 
	I1201 20:09:04.942162  358766 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:09:04.942172  358766 kubeadm.go:319] 
	I1201 20:09:04.942247  358766 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:09:04.942273  358766 kubeadm.go:319] 
	I1201 20:09:04.942326  358766 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:09:04.942401  358766 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:09:04.942553  358766 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:09:04.942579  358766 kubeadm.go:319] 
	I1201 20:09:04.942671  358766 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:09:04.942684  358766 kubeadm.go:319] 
	I1201 20:09:04.942747  358766 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:09:04.942757  358766 kubeadm.go:319] 
	I1201 20:09:04.942813  358766 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:09:04.942933  358766 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:09:04.943117  358766 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:09:04.943129  358766 kubeadm.go:319] 
	I1201 20:09:04.943301  358766 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:09:04.943409  358766 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:09:04.943415  358766 kubeadm.go:319] 
	I1201 20:09:04.943527  358766 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.943664  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 20:09:04.943691  358766 kubeadm.go:319] 	--control-plane 
	I1201 20:09:04.943696  358766 kubeadm.go:319] 
	I1201 20:09:04.943806  358766 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:09:04.943811  358766 kubeadm.go:319] 
	I1201 20:09:04.943935  358766 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.944090  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
	I1201 20:09:04.950014  358766 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 20:09:04.950166  358766 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:09:04.950195  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:09:04.950204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:04.952428  358766 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:09:03.805467  363421 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-009682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:03.823590  363421 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:03.827746  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:03.838256  363421 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:03.838431  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:09:03.838501  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.872004  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.872030  363421 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:09:03.872101  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.903038  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.903064  363421 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:03.903073  363421 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1201 20:09:03.903222  363421 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-009682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:03.903358  363421 ssh_runner.go:195] Run: crio config
	I1201 20:09:03.959717  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:09:03.959751  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:03.959774  363421 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:09:03.959806  363421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-009682 NodeName:default-k8s-diff-port-009682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:03.959960  363421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-009682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:09:03.960038  363421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:09:03.970035  363421 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:03.970088  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:03.981115  363421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1201 20:09:03.997387  363421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:09:04.013157  363421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1201 20:09:04.026334  363421 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:04.029983  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:04.040473  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.126425  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:04.157017  363421 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682 for IP: 192.168.103.2
	I1201 20:09:04.157048  363421 certs.go:195] generating shared ca certs ...
	I1201 20:09:04.157075  363421 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.157268  363421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:04.157363  363421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:04.157388  363421 certs.go:257] generating profile certs ...
	I1201 20:09:04.157486  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key
	I1201 20:09:04.157547  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564
	I1201 20:09:04.157582  363421 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key
	I1201 20:09:04.157719  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:04.157763  363421 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:04.157774  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:04.157807  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:04.157844  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:04.157878  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:04.157927  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:04.158666  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:04.181431  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:04.214841  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:04.239463  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:04.265930  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:09:04.285322  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:04.302994  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:04.322040  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:04.343997  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:04.366089  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:04.385828  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:04.403981  363421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:04.416528  363421 ssh_runner.go:195] Run: openssl version
	I1201 20:09:04.423168  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:04.431851  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435576  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435634  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.472014  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:04.480631  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:04.489567  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493837  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493903  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.529237  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:04.538935  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:04.547861  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551700  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551759  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.587866  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:04.597205  363421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:04.600927  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:04.636786  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:04.673583  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:04.727932  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:04.773666  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:04.824841  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:04.870082  363421 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:04.870188  363421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:04.870248  363421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:04.900068  363421 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:04.900091  363421 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:04.900105  363421 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:04.900111  363421 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:04.900115  363421 cri.go:89] found id: ""
	I1201 20:09:04.900169  363421 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:04.915170  363421 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:04Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:04.915380  363421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:04.924568  363421 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:04.924589  363421 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:04.924636  363421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:04.933995  363421 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:04.935868  363421 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-009682" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.936660  363421 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-009682" cluster setting kubeconfig missing "default-k8s-diff-port-009682" context setting]
	I1201 20:09:04.937981  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.940402  363421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:04.953428  363421 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1201 20:09:04.953484  363421 kubeadm.go:602] duration metric: took 28.88936ms to restartPrimaryControlPlane
	I1201 20:09:04.953496  363421 kubeadm.go:403] duration metric: took 83.422203ms to StartCluster
	I1201 20:09:04.953514  363421 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.953648  363421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.956713  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.957022  363421 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:04.957280  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:04.957337  363421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:04.957414  363421 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.957431  363421 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.957439  363421 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:04.957463  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.957965  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958147  363421 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958169  363421 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.958178  363421 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:04.958205  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.958327  363421 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958364  363421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-009682"
	I1201 20:09:04.958736  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958772  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.962452  363421 out.go:179] * Verifying Kubernetes components...
	W1201 20:09:03.005710  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:09:03.508253  352497 pod_ready.go:94] pod "coredns-7d764666f9-6kzhv" is "Ready"
	I1201 20:09:03.508310  352497 pod_ready.go:86] duration metric: took 31.508797646s for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.512451  352497 pod_ready.go:83] waiting for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.517492  352497 pod_ready.go:94] pod "etcd-no-preload-240359" is "Ready"
	I1201 20:09:03.517514  352497 pod_ready.go:86] duration metric: took 5.040457ms for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.519719  352497 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.523698  352497 pod_ready.go:94] pod "kube-apiserver-no-preload-240359" is "Ready"
	I1201 20:09:03.523718  352497 pod_ready.go:86] duration metric: took 3.972027ms for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.525515  352497 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.703526  352497 pod_ready.go:94] pod "kube-controller-manager-no-preload-240359" is "Ready"
	I1201 20:09:03.703559  352497 pod_ready.go:86] duration metric: took 178.021828ms for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.903891  352497 pod_ready.go:83] waiting for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.304197  352497 pod_ready.go:94] pod "kube-proxy-zbbsb" is "Ready"
	I1201 20:09:04.304226  352497 pod_ready.go:86] duration metric: took 400.309563ms for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.503580  352497 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906218  352497 pod_ready.go:94] pod "kube-scheduler-no-preload-240359" is "Ready"
	I1201 20:09:04.906257  352497 pod_ready.go:86] duration metric: took 402.653219ms for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906272  352497 pod_ready.go:40] duration metric: took 32.911773572s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:04.968561  352497 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:04.971433  352497 out.go:179] * Done! kubectl is now configured to use "no-preload-240359" cluster and "default" namespace by default
	I1201 20:09:04.964059  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.995900  363421 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:04.997174  363421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:04.997209  363421 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:04.998860  363421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:04.998888  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:04.998905  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:04.998920  363421 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:04.998954  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.998983  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.999136  363421 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.999150  363421 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:04.999178  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.999898  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:05.045128  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.046144  363421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.046164  363421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:05.046223  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:05.057331  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.078989  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.178412  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:05.205272  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:05.209155  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:05.209177  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:05.212411  363421 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:05.224200  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.235326  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:05.235354  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:05.271440  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:05.271468  363421 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:05.298205  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:05.298230  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:05.323776  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:05.323811  363421 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:05.348888  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:05.348939  363421 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:05.368484  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:05.368507  363421 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:05.396227  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:05.396254  363421 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:05.428995  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:05.429022  363421 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:05.460762  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:06.771032  363421 node_ready.go:49] node "default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:06.771062  363421 node_ready.go:38] duration metric: took 1.558615333s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:06.771088  363421 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:06.771140  363421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:07.343358  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.137932602s)
	I1201 20:09:07.343426  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.119182243s)
	I1201 20:09:07.343531  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.8827376s)
	I1201 20:09:07.343581  363421 api_server.go:72] duration metric: took 2.386523736s to wait for apiserver process to appear ...
	I1201 20:09:07.343592  363421 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:07.343666  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:07.344907  363421 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-009682 addons enable metrics-server
	
	I1201 20:09:07.349323  363421 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:07.349428  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:07.349453  363421 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:07.350469  363421 addons.go:530] duration metric: took 2.393112876s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
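
The 500 above is the apiserver reporting that two post-start hooks (rbac/bootstrap-roles and the scheduling bootstrap priority classes) had not finished yet; minikube simply keeps polling until the composite healthz check returns 200. A minimal way to re-run the same verbose probe by hand against this profile, assuming the profile name doubles as the kubectl context (minikube's default), is:

	# Sketch only: repeat the verbose healthz probe shown in the log above.
	minikube -p default-k8s-diff-port-009682 kubectl -- get --raw='/healthz?verbose'
	# The [-] rbac/bootstrap-roles and scheduling entries normally flip to [+]
	# within a few seconds once the apiserver's post-start hooks complete.
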
	I1201 20:09:04.953734  358766 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:09:04.963206  358766 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1201 20:09:04.963275  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:09:04.985147  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:09:05.377592  358766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:09:05.377721  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.377810  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-456990 minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=newest-cni-456990 minikube.k8s.io/primary=true
	I1201 20:09:05.396988  358766 ops.go:34] apiserver oom_adj: -16
	I1201 20:09:05.488649  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.988848  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.488812  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.989508  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:07.489212  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1201 20:09:04.208582  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:06.704348  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:07.988921  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.488969  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.989421  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.489572  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.556897  358766 kubeadm.go:1114] duration metric: took 4.179215442s to wait for elevateKubeSystemPrivileges
	I1201 20:09:09.556925  358766 kubeadm.go:403] duration metric: took 12.057888116s to StartCluster
	I1201 20:09:09.556942  358766 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.557018  358766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:09.561139  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.561442  358766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:09.561528  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 20:09:09.561526  358766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:09.561616  358766 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:09.561635  358766 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	I1201 20:09:09.561663  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.561697  358766 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:09.561707  358766 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:09.561716  358766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:09.562001  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.562203  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.563633  358766 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:09.565146  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:09.585957  358766 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	I1201 20:09:09.585993  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.586354  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.589424  358766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:09.590905  358766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.590926  358766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:09.590986  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.620117  358766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.620141  358766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:09.620204  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.622564  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.643761  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.651851  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 20:09:09.707356  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:09.735698  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.774797  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.833543  358766 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1201 20:09:09.835322  358766 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:09.835378  358766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:10.044020  358766 api_server.go:72] duration metric: took 482.54493ms to wait for apiserver process to appear ...
	I1201 20:09:10.044048  358766 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:10.044066  358766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:10.048749  358766 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:10.049558  358766 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:10.049578  358766 api_server.go:131] duration metric: took 5.523573ms to wait for apiserver health ...
	I1201 20:09:10.049586  358766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:10.050178  358766 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1201 20:09:10.051821  358766 addons.go:530] duration metric: took 490.303553ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1201 20:09:10.052035  358766 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:10.052063  358766 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052076  358766 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:10.052094  358766 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:09:10.052103  358766 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running
	I1201 20:09:10.052117  358766 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:10.052128  358766 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:09:10.052138  358766 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:10.052148  358766 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052158  358766 system_pods.go:74] duration metric: took 2.56626ms to wait for pod list to return data ...
	I1201 20:09:10.052170  358766 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:10.054122  358766 default_sa.go:45] found service account: "default"
	I1201 20:09:10.054138  358766 default_sa.go:55] duration metric: took 1.961704ms for default service account to be created ...
	I1201 20:09:10.054150  358766 kubeadm.go:587] duration metric: took 492.678996ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:10.054169  358766 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:10.056013  358766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:10.056034  358766 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:10.056055  358766 node_conditions.go:105] duration metric: took 1.88044ms to run NodePressure ...
	I1201 20:09:10.056067  358766 start.go:242] waiting for startup goroutines ...
	I1201 20:09:10.338257  358766 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-456990" context rescaled to 1 replicas
	I1201 20:09:10.338330  358766 start.go:247] waiting for cluster config update ...
	I1201 20:09:10.338346  358766 start.go:256] writing updated cluster config ...
	I1201 20:09:10.338608  358766 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:10.395956  358766 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:10.398166  358766 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
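
At this point the newest-cni-456990 profile reports a healthy apiserver and both requested addons enabled; everything below is the post-run diagnostic bundle minikube collects (CRI-O, container status, node describe, dmesg, and component logs). A quick manual spot-check of the same state, assuming the kubectl context name matches the profile name, might look like:

	# Sketch only: confirm the profile and its kube-system pods after "Done!".
	minikube -p newest-cni-456990 status
	kubectl --context newest-cni-456990 get pods -n kube-system
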
	
	
	==> CRI-O <==
	Dec 01 20:09:00 newest-cni-456990 crio[774]: time="2025-12-01T20:09:00.481831063Z" level=info msg="Started container" PID=2151 containerID=530fe8fdbb0fc5a8e192af8db90e4a0936c3d430a34038af03d886a88f5bd178 description=kube-system/kube-controller-manager-newest-cni-456990/kube-controller-manager id=baf016cd-a4ba-4847-b4e8-cc11eb040f14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00c05bfbac8b9d557146d36dae77ce4229c8f724ee3decbad0b726020129e86e
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.087435531Z" level=info msg="Running pod sandbox: kube-system/kindnet-gbbwm/POD" id=8974a3cc-e692-4777-b759-ec36f07f9f17 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.087509401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.089451589Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gmbzw/POD" id=464581a6-917e-41a7-8250-b90ec4f51ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.089520421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.095095677Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8974a3cc-e692-4777-b759-ec36f07f9f17 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.096009688Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=464581a6-917e-41a7-8250-b90ec4f51ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.097115902Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.097696207Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.097821386Z" level=info msg="Ran pod sandbox 8f4bffc447af2d604f9e0128553d314da9eee84f7671e8e2c9814440a12e9b63 with infra container: kube-system/kindnet-gbbwm/POD" id=8974a3cc-e692-4777-b759-ec36f07f9f17 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.098256236Z" level=info msg="Ran pod sandbox 35446d9df8c652a93fd73f7803ddb7d64ed95d1bfa875a05a4c103ac8d5e1bcf with infra container: kube-system/kube-proxy-gmbzw/POD" id=464581a6-917e-41a7-8250-b90ec4f51ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.09899383Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c2c54dce-c1da-40da-ba23-0e96adc87d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.09911157Z" level=info msg="Image docker.io/kindest/kindnetd:v20250512-df8de77b not found" id=c2c54dce-c1da-40da-ba23-0e96adc87d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.099157515Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20250512-df8de77b found" id=c2c54dce-c1da-40da-ba23-0e96adc87d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.099275185Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=00c7843f-34e0-4a23-a8a2-d1516f13891c name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.100126152Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e69fdc71-be93-4385-8ee4-8111738534c3 name=/runtime.v1.ImageService/PullImage
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.100318193Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=98162533-fdaf-47fa-8e10-e029560e19d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.103818102Z" level=info msg="Creating container: kube-system/kube-proxy-gmbzw/kube-proxy" id=ab07d1a7-e12e-4f35-90a6-fc222055a310 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.103933101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.104692329Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250512-df8de77b\""
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.107644071Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.108219845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.137558446Z" level=info msg="Created container 2ed69a1616b89f7162e3139251d2bf175d1e78af8ed93f462a722c8eace53485: kube-system/kube-proxy-gmbzw/kube-proxy" id=ab07d1a7-e12e-4f35-90a6-fc222055a310 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.138213932Z" level=info msg="Starting container: 2ed69a1616b89f7162e3139251d2bf175d1e78af8ed93f462a722c8eace53485" id=3e65a7c8-93eb-4630-9358-b9eb0c6f886e name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:10 newest-cni-456990 crio[774]: time="2025-12-01T20:09:10.140758405Z" level=info msg="Started container" PID=2505 containerID=2ed69a1616b89f7162e3139251d2bf175d1e78af8ed93f462a722c8eace53485 description=kube-system/kube-proxy-gmbzw/kube-proxy id=3e65a7c8-93eb-4630-9358-b9eb0c6f886e name=/runtime.v1.RuntimeService/StartContainer sandboxID=35446d9df8c652a93fd73f7803ddb7d64ed95d1bfa875a05a4c103ac8d5e1bcf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2ed69a1616b89       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   35446d9df8c65       kube-proxy-gmbzw                            kube-system
	530fe8fdbb0fc       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   00c05bfbac8b9       kube-controller-manager-newest-cni-456990   kube-system
	cd4ebfb44ee7e       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   0f0dcce573f01       kube-scheduler-newest-cni-456990            kube-system
	43803a254fc03       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   c2abf7ceabc2f       etcd-newest-cni-456990                      kube-system
	3435977e2e6c1       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   5443b2e9a522b       kube-apiserver-newest-cni-456990            kube-system
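
The listing above is the CRI view of the node a second or two after kube-proxy started; the CoreDNS and kindnet containers are absent because the node was still carrying the not-ready taint when the snapshot was taken. The same listing can be pulled by hand from inside the node, for example:

	# Sketch only: inspect CRI-O's container state directly on the node.
	minikube -p newest-cni-456990 ssh -- sudo crictl ps -a
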
	
	
	==> describe nodes <==
	Name:               newest-cni-456990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-456990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=newest-cni-456990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:09:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-456990
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:04 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:04 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:04 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 01 Dec 2025 20:09:04 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-456990
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                725bbd5a-64fb-4dec-99aa-76f4e9244e2a
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-456990                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-gbbwm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-456990             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-456990    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-gmbzw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-456990             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-456990 event: Registered Node newest-cni-456990 in Controller
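
The Ready=False condition above shows the usual state of a freshly bootstrapped node: no CNI config had been written to /etc/cni/net.d/ yet because kindnet was still pulling its image (see the CRI-O log earlier in this section). One way to watch that clear, assuming ssh access to the profile, is:

	# Sketch only: wait for the CNI config to appear and the node to go Ready.
	minikube -p newest-cni-456990 ssh -- ls /etc/cni/net.d/
	kubectl --context newest-cni-456990 get nodes -w
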
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [43803a254fc03f4c513f350c5b0c7017a26aa99b58a211bcdb07b98732a5618a] <==
	{"level":"warn","ts":"2025-12-01T20:09:01.075458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.085812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.093154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.101036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.108119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.115982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.122607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.129175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.137392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.144508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.151946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.159118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.166737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.172929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.179540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.186346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.193651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.200437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.207381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.214255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.227417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.234857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.242578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.254009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:01.300200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54752","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:12 up  1:51,  0 user,  load average: 4.09, 3.42, 2.39
	Linux newest-cni-456990 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [3435977e2e6c1a2668056084125d6ab3cb9f570e217f453d3305c8e228725c33] <==
	I1201 20:09:01.811455       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1201 20:09:01.811568       1 aggregator.go:187] initial CRD sync complete...
	I1201 20:09:01.811619       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:09:01.811629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:09:01.811637       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:09:01.816614       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1201 20:09:01.822349       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:09:02.009613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:09:02.715387       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1201 20:09:02.724223       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1201 20:09:02.724241       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:09:03.268477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:09:03.313048       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:09:03.418726       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1201 20:09:03.425181       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1201 20:09:03.426535       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:09:03.430990       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:09:03.745237       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:09:04.344683       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:09:04.354681       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1201 20:09:04.362568       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:09:09.446803       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:09.651368       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:09:09.655652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:09:09.748365       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [530fe8fdbb0fc5a8e192af8db90e4a0936c3d430a34038af03d886a88f5bd178] <==
	I1201 20:09:08.556655       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556694       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556771       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556824       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556838       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556845       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556869       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556899       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557000       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.556051       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557071       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557142       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557197       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557226       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557244       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557405       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557422       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.557488       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.558718       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:08.562578       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-456990" podCIDRs=["10.42.0.0/24"]
	I1201 20:09:08.578654       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.655820       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:08.655855       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:09:08.655860       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1201 20:09:08.659451       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2ed69a1616b89f7162e3139251d2bf175d1e78af8ed93f462a722c8eace53485] <==
	I1201 20:09:10.177955       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:09:10.254598       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:10.355147       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:10.355186       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1201 20:09:10.355341       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:09:10.377097       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:09:10.377165       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:09:10.383123       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:09:10.383584       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:09:10.383655       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:10.385407       1 config.go:309] "Starting node config controller"
	I1201 20:09:10.385770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:09:10.385826       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:09:10.385558       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:09:10.385858       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:09:10.385514       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:09:10.385547       1 config.go:200] "Starting service config controller"
	I1201 20:09:10.385886       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:09:10.385882       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:09:10.487701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:09:10.487774       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:09:10.487942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cd4ebfb44ee7e96d8ffefea151863c67cba45d9557bdbb8694172e059bdc6404] <==
	E1201 20:09:02.679427       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1201 20:09:02.680382       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1201 20:09:02.712786       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:09:02.713820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1201 20:09:02.720989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:09:02.721928       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1201 20:09:02.723920       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1201 20:09:02.724796       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1201 20:09:02.731871       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:09:02.732919       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1201 20:09:02.803585       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:09:02.804586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1201 20:09:02.819975       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1201 20:09:02.821087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1201 20:09:02.854935       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1201 20:09:02.856087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1201 20:09:02.881860       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1201 20:09:02.882953       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1201 20:09:02.922470       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1201 20:09:02.923457       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1201 20:09:03.038150       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1201 20:09:03.039396       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1201 20:09:03.069759       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1201 20:09:03.070900       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1201 20:09:05.966186       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: E1201 20:09:05.259282    2229 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-456990\" already exists" pod="kube-system/kube-apiserver-newest-cni-456990"
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: E1201 20:09:05.259372    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-456990" containerName="kube-apiserver"
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: I1201 20:09:05.281656    2229 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-456990" podStartSLOduration=1.281638231 podStartE2EDuration="1.281638231s" podCreationTimestamp="2025-12-01 20:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:09:05.279055224 +0000 UTC m=+1.166096116" watchObservedRunningTime="2025-12-01 20:09:05.281638231 +0000 UTC m=+1.168679125"
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: I1201 20:09:05.319510    2229 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-456990" podStartSLOduration=1.319488468 podStartE2EDuration="1.319488468s" podCreationTimestamp="2025-12-01 20:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:09:05.299228543 +0000 UTC m=+1.186269435" watchObservedRunningTime="2025-12-01 20:09:05.319488468 +0000 UTC m=+1.206529361"
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: I1201 20:09:05.334891    2229 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-456990" podStartSLOduration=1.334870399 podStartE2EDuration="1.334870399s" podCreationTimestamp="2025-12-01 20:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:09:05.320369777 +0000 UTC m=+1.207410669" watchObservedRunningTime="2025-12-01 20:09:05.334870399 +0000 UTC m=+1.221911292"
	Dec 01 20:09:05 newest-cni-456990 kubelet[2229]: I1201 20:09:05.470442    2229 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-456990" podStartSLOduration=1.470423748 podStartE2EDuration="1.470423748s" podCreationTimestamp="2025-12-01 20:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:09:05.336183656 +0000 UTC m=+1.223224550" watchObservedRunningTime="2025-12-01 20:09:05.470423748 +0000 UTC m=+1.357464654"
	Dec 01 20:09:06 newest-cni-456990 kubelet[2229]: E1201 20:09:06.231422    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-456990" containerName="kube-apiserver"
	Dec 01 20:09:06 newest-cni-456990 kubelet[2229]: E1201 20:09:06.231540    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-456990" containerName="kube-controller-manager"
	Dec 01 20:09:06 newest-cni-456990 kubelet[2229]: E1201 20:09:06.231614    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-456990" containerName="kube-scheduler"
	Dec 01 20:09:06 newest-cni-456990 kubelet[2229]: E1201 20:09:06.231735    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:07 newest-cni-456990 kubelet[2229]: E1201 20:09:07.233839    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-456990" containerName="kube-apiserver"
	Dec 01 20:09:07 newest-cni-456990 kubelet[2229]: E1201 20:09:07.233942    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:07 newest-cni-456990 kubelet[2229]: E1201 20:09:07.233976    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-456990" containerName="kube-scheduler"
	Dec 01 20:09:08 newest-cni-456990 kubelet[2229]: E1201 20:09:08.236015    2229 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-456990" containerName="kube-scheduler"
	Dec 01 20:09:08 newest-cni-456990 kubelet[2229]: I1201 20:09:08.651926    2229 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 01 20:09:08 newest-cni-456990 kubelet[2229]: I1201 20:09:08.652593    2229 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.836960    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-xtables-lock\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837013    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-cni-cfg\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837042    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b60069ca-4117-475a-9a2f-5ecd18fca600-kube-proxy\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837065    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf2j\" (UniqueName: \"kubernetes.io/projected/b60069ca-4117-475a-9a2f-5ecd18fca600-kube-api-access-frf2j\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837095    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5slj\" (UniqueName: \"kubernetes.io/projected/7386a806-e262-4de4-827f-fcc08a786840-kube-api-access-k5slj\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837115    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-xtables-lock\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837140    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-lib-modules\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:09 newest-cni-456990 kubelet[2229]: I1201 20:09:09.837173    2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-lib-modules\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:10 newest-cni-456990 kubelet[2229]: I1201 20:09:10.253138    2229 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gmbzw" podStartSLOduration=1.2531200980000001 podStartE2EDuration="1.253120098s" podCreationTimestamp="2025-12-01 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 20:09:10.252939726 +0000 UTC m=+6.139980619" watchObservedRunningTime="2025-12-01 20:09:10.253120098 +0000 UTC m=+6.140160991"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-456990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-6t6ld storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner: exit status 1 (69.001528ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-6t6ld" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)
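Note (post-mortem reading, not part of the test run): the "describe pod" above was issued without a namespace, so kubectl looked for coredns-7d764666f9-6t6ld and storage-provisioner in the default namespace, while both pods live in kube-system; that is consistent with the NotFound errors. A namespace-aware sketch of the same query would be:

	# list non-running pods in kube-system, then describe them there
	kubectl --context newest-cni-456990 -n kube-system get po --field-selector=status.phase!=Running
	kubectl --context newest-cni-456990 -n kube-system describe pod coredns-7d764666f9-6t6ld storage-provisioner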

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-240359 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-240359 --alsologtostderr -v=1: exit status 80 (1.850599803s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-240359 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:09:16.986410  367810 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:16.986731  367810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:16.986745  367810 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:16.986751  367810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:16.987070  367810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:16.987427  367810 out.go:368] Setting JSON to false
	I1201 20:09:16.987450  367810 mustload.go:66] Loading cluster: no-preload-240359
	I1201 20:09:16.987938  367810 config.go:182] Loaded profile config "no-preload-240359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:16.988566  367810 cli_runner.go:164] Run: docker container inspect no-preload-240359 --format={{.State.Status}}
	I1201 20:09:17.012125  367810 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:09:17.012541  367810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:17.082716  367810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-01 20:09:17.070737997 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:17.083686  367810 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-240359 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:09:17.086378  367810 out.go:179] * Pausing node no-preload-240359 ... 
	I1201 20:09:17.087747  367810 host.go:66] Checking if "no-preload-240359" exists ...
	I1201 20:09:17.088093  367810 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:17.088143  367810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-240359
	I1201 20:09:17.112503  367810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/no-preload-240359/id_rsa Username:docker}
	I1201 20:09:17.217151  367810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:17.229380  367810 pause.go:52] kubelet running: true
	I1201 20:09:17.229465  367810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:17.417978  367810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:17.418080  367810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:17.482559  367810 cri.go:89] found id: "a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457"
	I1201 20:09:17.482579  367810 cri.go:89] found id: "510968b59805a625501e44f964dc5dbaaeca09bb0e1fad75aead446e677e99e2"
	I1201 20:09:17.482583  367810 cri.go:89] found id: "2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf"
	I1201 20:09:17.482587  367810 cri.go:89] found id: "844ba0fcae08d61a06cb533c6dd7bc40ecb98db5d968faabcf4760594e3545c0"
	I1201 20:09:17.482590  367810 cri.go:89] found id: "de746e8ab3a57e792862aca89bb9e8210ee00df2dcb4ec56548296e6b1618ac7"
	I1201 20:09:17.482593  367810 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:09:17.482596  367810 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:09:17.482598  367810 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:09:17.482601  367810 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:09:17.482606  367810 cri.go:89] found id: "cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	I1201 20:09:17.482608  367810 cri.go:89] found id: "ba8a74fae657c1cb17397fdb3a557728f7746032c8530cacd94377d02328e38e"
	I1201 20:09:17.482612  367810 cri.go:89] found id: ""
	I1201 20:09:17.482655  367810 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:17.494539  367810 retry.go:31] will retry after 329.600141ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:17Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:17.825093  367810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:17.839414  367810 pause.go:52] kubelet running: false
	I1201 20:09:17.839478  367810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:18.004734  367810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:18.004831  367810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:18.076844  367810 cri.go:89] found id: "a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457"
	I1201 20:09:18.076866  367810 cri.go:89] found id: "510968b59805a625501e44f964dc5dbaaeca09bb0e1fad75aead446e677e99e2"
	I1201 20:09:18.076871  367810 cri.go:89] found id: "2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf"
	I1201 20:09:18.076876  367810 cri.go:89] found id: "844ba0fcae08d61a06cb533c6dd7bc40ecb98db5d968faabcf4760594e3545c0"
	I1201 20:09:18.076881  367810 cri.go:89] found id: "de746e8ab3a57e792862aca89bb9e8210ee00df2dcb4ec56548296e6b1618ac7"
	I1201 20:09:18.076893  367810 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:09:18.076898  367810 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:09:18.076904  367810 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:09:18.076908  367810 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:09:18.076925  367810 cri.go:89] found id: "cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	I1201 20:09:18.076933  367810 cri.go:89] found id: "ba8a74fae657c1cb17397fdb3a557728f7746032c8530cacd94377d02328e38e"
	I1201 20:09:18.076938  367810 cri.go:89] found id: ""
	I1201 20:09:18.076979  367810 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:18.089968  367810 retry.go:31] will retry after 422.621325ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:18Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:18.513457  367810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:18.526673  367810 pause.go:52] kubelet running: false
	I1201 20:09:18.526744  367810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:18.673221  367810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:18.673332  367810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:18.740157  367810 cri.go:89] found id: "a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457"
	I1201 20:09:18.740182  367810 cri.go:89] found id: "510968b59805a625501e44f964dc5dbaaeca09bb0e1fad75aead446e677e99e2"
	I1201 20:09:18.740188  367810 cri.go:89] found id: "2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf"
	I1201 20:09:18.740192  367810 cri.go:89] found id: "844ba0fcae08d61a06cb533c6dd7bc40ecb98db5d968faabcf4760594e3545c0"
	I1201 20:09:18.740195  367810 cri.go:89] found id: "de746e8ab3a57e792862aca89bb9e8210ee00df2dcb4ec56548296e6b1618ac7"
	I1201 20:09:18.740199  367810 cri.go:89] found id: "6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140"
	I1201 20:09:18.740203  367810 cri.go:89] found id: "e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6"
	I1201 20:09:18.740208  367810 cri.go:89] found id: "29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51"
	I1201 20:09:18.740212  367810 cri.go:89] found id: "36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231"
	I1201 20:09:18.740220  367810 cri.go:89] found id: "cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	I1201 20:09:18.740225  367810 cri.go:89] found id: "ba8a74fae657c1cb17397fdb3a557728f7746032c8530cacd94377d02328e38e"
	I1201 20:09:18.740229  367810 cri.go:89] found id: ""
	I1201 20:09:18.740274  367810 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:18.754040  367810 out.go:203] 
	W1201 20:09:18.755463  367810 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:09:18.755478  367810 out.go:285] * 
	* 
	W1201 20:09:18.759774  367810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:09:18.761192  367810 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-240359 --alsologtostderr -v=1 failed: exit status 80
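Reading of the failure (editor's sketch, not from the test run): every pause attempt in the stderr above stops at the same step, "sudo runc list -f json" exiting 1 with "open /run/runc: no such file or directory". The commands below only re-run by hand what the log already shows, using the profile name from this run:

	# confirm the runc state directory the pause path reads is absent
	out/minikube-linux-amd64 -p no-preload-240359 ssh -- sudo ls -la /run/runc
	# the listing command the pause path kept retrying
	out/minikube-linux-amd64 -p no-preload-240359 ssh -- sudo runc list -f json
	# compare with what the CRI itself reports as running
	out/minikube-linux-amd64 -p no-preload-240359 ssh -- sudo crictl ps -a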
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-240359
helpers_test.go:243: (dbg) docker inspect no-preload-240359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	        "Created": "2025-12-01T20:07:06.01914801Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:21.884818011Z",
	            "FinishedAt": "2025-12-01T20:08:20.490851656Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hostname",
	        "HostsPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hosts",
	        "LogPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340-json.log",
	        "Name": "/no-preload-240359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-240359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-240359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	                "LowerDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-240359",
	                "Source": "/var/lib/docker/volumes/no-preload-240359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-240359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-240359",
	                "name.minikube.sigs.k8s.io": "no-preload-240359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b0930adfcc271d0647ed94570295bcb637228a0a45bd8b4334dacd5f7800b88c",
	            "SandboxKey": "/var/run/docker/netns/b0930adfcc27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-240359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9442b61c89479da474674c2efe3f782398fb10944284ed674aaa668317b06131",
	                    "EndpointID": "87076adabffbdcd6a2bcc47050d1b93ceb9df7a72a9b70f1b35d7cfb77d50b64",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6a:ac:01:cd:75:42",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-240359",
	                        "52fdbf3aa5c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
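For reference (sketch, simplified shell quoting): the SSH endpoint the pause command used (127.0.0.1:33118, see the sshutil line in the stderr above) can be read from this inspect output with the same Go template the cli_runner step runs:

	# print the host port mapped to the container's 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-240359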
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359: exit status 2 (317.889816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-240359 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-240359 logs -n 25: (1.076269016s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                                      │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:57.524741  363421 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:57.524856  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.524866  363421 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:57.524872  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.525166  363421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:57.525742  363421 out.go:368] Setting JSON to false
	I1201 20:08:57.527230  363421 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6688,"bootTime":1764613049,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:57.527326  363421 start.go:143] virtualization: kvm guest
	I1201 20:08:57.529688  363421 out.go:179] * [default-k8s-diff-port-009682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:57.530978  363421 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:57.530985  363421 notify.go:221] Checking for updates...
	I1201 20:08:57.532313  363421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:57.533552  363421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:57.534766  363421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:57.535947  363421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:57.537115  363421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:57.538758  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:57.539252  363421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:57.564657  363421 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:57.564748  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.627789  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.613982153 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.627885  363421 docker.go:319] overlay module found
	I1201 20:08:57.629736  363421 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:57.630805  363421 start.go:309] selected driver: docker
	I1201 20:08:57.630817  363421 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.630891  363421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:57.631486  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.694034  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.682818846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.694423  363421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:57.694466  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:08:57.694533  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:57.694577  363421 start.go:353] cluster config:
	{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.696647  363421 out.go:179] * Starting "default-k8s-diff-port-009682" primary control-plane node in "default-k8s-diff-port-009682" cluster
	I1201 20:08:57.697915  363421 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:57.699088  363421 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:54.033979  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.451822367s)
	I1201 20:08:54.034006  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1201 20:08:54.034040  358766 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:54.034079  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:55.285959  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.251855407s)
	I1201 20:08:55.285986  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1201 20:08:55.286009  358766 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.286056  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.835833  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 20:08:55.835878  358766 cache_images.go:125] Successfully loaded all cached images
	I1201 20:08:55.835887  358766 cache_images.go:94] duration metric: took 9.220203533s to LoadCachedImages
	I1201 20:08:55.835902  358766 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:55.836000  358766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:55.836092  358766 ssh_runner.go:195] Run: crio config
	I1201 20:08:55.882185  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:08:55.882204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:55.882221  358766 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:08:55.882240  358766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:55.882388  358766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:08:55.882456  358766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.896478  358766 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 20:08:55.896542  358766 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.905428  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1201 20:08:55.905471  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1201 20:08:55.905478  358766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:55.905492  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1201 20:08:55.905548  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 20:08:55.905560  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 20:08:55.924100  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 20:08:55.924135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1201 20:08:55.924162  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 20:08:55.924163  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 20:08:55.924196  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1201 20:08:55.931240  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 20:08:55.931269  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1201 20:08:56.484601  358766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:56.493223  358766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:56.506733  358766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:56.551910  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:08:56.565479  358766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:56.569659  358766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:56.674504  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:56.766035  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:56.790444  358766 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:08:56.790466  358766 certs.go:195] generating shared ca certs ...
	I1201 20:08:56.790488  358766 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.790666  358766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:56.790711  358766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:56.790722  358766 certs.go:257] generating profile certs ...
	I1201 20:08:56.790775  358766 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:08:56.790787  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt with IP's: []
	I1201 20:08:56.856182  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt ...
	I1201 20:08:56.856207  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt: {Name:mk188d1d1ba3b1359a8c4c959ae5d3c192a20a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856394  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key ...
	I1201 20:08:56.856408  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key: {Name:mkb94c2da30d31143505840f4576d1cd1a4db927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856490  358766 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:08:56.856504  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1201 20:08:57.050302  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 ...
	I1201 20:08:57.050328  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757: {Name:mkeefb489f4b625e46090918386fdc47c61b5f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050500  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 ...
	I1201 20:08:57.050517  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757: {Name:mkf596c61e744a065cd8401e41d8e454de70b079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050632  358766 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt
	I1201 20:08:57.050717  358766 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key
	I1201 20:08:57.050771  358766 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:08:57.050786  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt with IP's: []
	I1201 20:08:57.090707  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt ...
	I1201 20:08:57.090730  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt: {Name:mk173cd6fe67eab6f70384a04dff60d8ad263813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.090894  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key ...
	I1201 20:08:57.090908  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key: {Name:mk07102f58d64e403b75622a5498a55b5a7d2682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.091078  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:57.091119  358766 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:57.091129  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:57.091155  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:57.091178  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:57.091204  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:57.091249  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:57.091846  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:57.110296  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:57.127543  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:57.145135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:57.161965  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:57.178832  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:57.196202  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:57.216297  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:57.235646  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:57.255802  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:57.274205  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:57.291845  358766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:57.305221  358766 ssh_runner.go:195] Run: openssl version
	I1201 20:08:57.311715  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:57.321501  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325823  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325889  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.365528  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:08:57.375267  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:57.384499  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388796  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388853  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.427537  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:57.436653  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:57.446332  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450883  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450941  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.485407  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:08:57.494810  358766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:57.498985  358766 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:08:57.499041  358766 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.499130  358766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:57.499181  358766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:57.528197  358766 cri.go:89] found id: ""
	I1201 20:08:57.528247  358766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:57.536955  358766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:08:57.545150  358766 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:08:57.545217  358766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:08:57.553840  358766 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:08:57.553872  358766 kubeadm.go:158] found existing configuration files:
	
	I1201 20:08:57.553923  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:08:57.562547  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:08:57.562603  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:08:57.570825  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:08:57.579016  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:08:57.579104  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:08:57.588155  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.598007  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:08:57.598081  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.607460  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:08:57.616501  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:08:57.616576  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:08:57.625112  358766 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:08:57.668430  358766 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 20:08:57.668522  358766 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:08:57.700560  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:57.700599  363421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:57.700606  363421 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:57.700646  363421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:57.700699  363421 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:57.700709  363421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:57.700830  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:57.725595  363421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:57.725622  363421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:57.725643  363421 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:57.725678  363421 start.go:360] acquireMachinesLock for default-k8s-diff-port-009682: {Name:mk42586c39f050856fb58aa29e83d0a77c4546b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:57.725749  363421 start.go:364] duration metric: took 47.794µs to acquireMachinesLock for "default-k8s-diff-port-009682"
	I1201 20:08:57.725771  363421 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:57.725786  363421 fix.go:54] fixHost starting: 
	I1201 20:08:57.726056  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:57.747795  363421 fix.go:112] recreateIfNeeded on default-k8s-diff-port-009682: state=Stopped err=<nil>
	W1201 20:08:57.747827  363421 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:57.757685  358766 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:08:57.757794  358766 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:08:57.757867  358766 kubeadm.go:319] OS: Linux
	I1201 20:08:57.757937  358766 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:08:57.758000  358766 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:08:57.758103  358766 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:08:57.758195  358766 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:08:57.758280  358766 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:08:57.758368  358766 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:08:57.758454  358766 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:08:57.758515  358766 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:08:57.824201  358766 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:08:57.824361  358766 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:08:57.824478  358766 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:08:57.839908  358766 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1201 20:08:54.705077  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:08:57.204772  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:08:57.842269  358766 out.go:252]   - Generating certificates and keys ...
	I1201 20:08:57.842407  358766 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:08:57.842551  358766 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:08:57.881252  358766 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:08:58.037461  358766 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:08:58.107548  358766 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:08:58.187232  358766 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:08:58.505054  358766 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:08:58.505252  358766 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.539384  358766 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:08:58.539557  358766 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.601325  358766 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:08:58.651270  358766 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:08:58.937961  358766 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:08:58.938159  358766 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:08:59.070341  358766 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:08:59.130405  358766 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:08:59.174058  358766 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:08:59.235555  358766 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:08:59.401392  358766 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:08:59.401904  358766 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:08:59.405522  358766 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1201 20:08:58.006721  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	W1201 20:09:00.505892  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:08:57.749349  363421 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-009682" ...
	I1201 20:08:57.749457  363421 cli_runner.go:164] Run: docker start default-k8s-diff-port-009682
	I1201 20:08:58.018381  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:58.043206  363421 kic.go:430] container "default-k8s-diff-port-009682" state is running.
	I1201 20:08:58.043709  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:08:58.063866  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:58.064140  363421 machine.go:94] provisionDockerMachine start ...
	I1201 20:08:58.064229  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:08:58.083160  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:58.083444  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:08:58.083458  363421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:08:58.084209  363421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37088->127.0.0.1:33133: read: connection reset by peer
	I1201 20:09:01.230589  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.230617  363421 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-009682"
	I1201 20:09:01.230674  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.253348  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.253664  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.253688  363421 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-009682 && echo "default-k8s-diff-port-009682" | sudo tee /etc/hostname
	I1201 20:09:01.411152  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.411226  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.435481  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.435749  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.435776  363421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-009682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-009682/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-009682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:01.579541  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:01.579565  363421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:01.579613  363421 ubuntu.go:190] setting up certificates
	I1201 20:09:01.579630  363421 provision.go:84] configureAuth start
	I1201 20:09:01.579679  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:01.598330  363421 provision.go:143] copyHostCerts
	I1201 20:09:01.598405  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:01.598423  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:01.598511  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:01.598683  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:01.598697  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:01.598736  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:01.598833  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:01.598844  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:01.598881  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:01.598980  363421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-009682 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-009682 localhost minikube]
	I1201 20:09:01.737971  363421 provision.go:177] copyRemoteCerts
	I1201 20:09:01.738050  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:01.738109  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.762885  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:01.874168  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:01.893977  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:09:01.912032  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:01.930036  363421 provision.go:87] duration metric: took 350.392221ms to configureAuth
	I1201 20:09:01.930066  363421 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:01.930245  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:01.930379  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.950447  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.950661  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.950679  363421 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:02.295040  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:02.295063  363421 machine.go:97] duration metric: took 4.230905038s to provisionDockerMachine
	I1201 20:09:02.295074  363421 start.go:293] postStartSetup for "default-k8s-diff-port-009682" (driver="docker")
	I1201 20:09:02.295086  363421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:02.295140  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:02.295192  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.314605  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.417273  363421 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:02.420863  363421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:02.420886  363421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:02.420897  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:02.420943  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:02.421012  363421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:02.421096  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:02.429052  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:02.447160  363421 start.go:296] duration metric: took 152.072363ms for postStartSetup
	I1201 20:09:02.447237  363421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:02.447272  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.467442  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:08:59.406942  358766 out.go:252]   - Booting up control plane ...
	I1201 20:08:59.407069  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:08:59.407186  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:08:59.407725  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:08:59.421400  358766 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:08:59.421548  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:08:59.429946  358766 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:08:59.430243  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:08:59.430328  358766 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:08:59.525457  358766 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:08:59.525628  358766 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:09:00.027176  358766 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.895523ms
	I1201 20:09:00.029992  358766 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:09:00.030115  358766 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1201 20:09:00.030278  358766 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:09:00.030365  358766 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:09:01.034944  358766 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004762142s
	I1201 20:09:01.771813  358766 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.741647999s
	W1201 20:08:59.205004  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:01.709711  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:03.531458  358766 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501373313s
	I1201 20:09:03.549804  358766 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:09:03.560547  358766 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:09:03.570543  358766 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:09:03.570792  358766 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-456990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:09:03.579453  358766 kubeadm.go:319] [bootstrap-token] Using token: t6nth9.1dme03npps7xtqxg
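The [control-plane-check] lines above poll fixed local endpoints; when a bootstrap like this stalls, the same probes can be run by hand from inside the node (for example via `minikube ssh -p newest-cni-456990`). A minimal sketch: the apiserver address 192.168.76.2:8443 is taken from the log above, the other ports are kubeadm's defaults, and -k is needed because these components serve self-signed certificates.

    # Probe the endpoints kubeadm waits on during control-plane bootstrap.
    curl -sf  http://127.0.0.1:10248/healthz  && echo "kubelet healthy"
    curl -skf https://127.0.0.1:10257/healthz && echo "controller-manager healthy"
    curl -skf https://127.0.0.1:10259/livez   && echo "scheduler healthy"
    curl -skf https://192.168.76.2:8443/livez && echo "apiserver healthy"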
	I1201 20:09:02.564699  363421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:02.569398  363421 fix.go:56] duration metric: took 4.843608039s for fixHost
	I1201 20:09:02.569438  363421 start.go:83] releasing machines lock for "default-k8s-diff-port-009682", held for 4.843675394s
	I1201 20:09:02.569512  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:02.588215  363421 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:02.588256  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.588344  363421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:02.588479  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.607456  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.607749  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.769630  363421 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:02.777217  363421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:02.819594  363421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:02.825242  363421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:02.825319  363421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:02.834483  363421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:02.834510  363421 start.go:496] detecting cgroup driver to use...
	I1201 20:09:02.834562  363421 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:02.834631  363421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:02.850900  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:02.866607  363421 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:02.866666  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:02.885043  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:02.900602  363421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:03.001146  363421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:03.104903  363421 docker.go:234] disabling docker service ...
	I1201 20:09:03.104982  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:03.121947  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:03.139525  363421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:03.252507  363421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:03.356626  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:03.369483  363421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:03.383959  363421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:03.384018  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.392886  363421 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:03.392948  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.402431  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.411640  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.422189  363421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:03.432194  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.441678  363421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.450620  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.460183  363421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:03.467584  363421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:03.475047  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:03.567439  363421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:03.699774  363421 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:03.699841  363421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:03.704895  363421 start.go:564] Will wait 60s for crictl version
	I1201 20:09:03.704954  363421 ssh_runner.go:195] Run: which crictl
	I1201 20:09:03.708839  363421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:03.734207  363421 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:03.734306  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.768401  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.804334  363421 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
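The sequence of tee/sed commands above reduces to a handful of edits to /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf. A consolidated sketch of the core changes (paths and values copied from the log; the ip_unprivileged_port_start and default_sysctls tweaks are omitted), to be run as root on the node:

    #!/usr/bin/env bash
    set -euo pipefail
    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml

    conf=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and systemd cgroup driver, as configured above.
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sed -i '/conmon_cgroup = .*/d' "$conf"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    systemctl daemon-reload
    systemctl restart crio
    crictl version    # expect RuntimeName: cri-o, RuntimeVersion: 1.34.2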
	I1201 20:09:03.580798  358766 out.go:252]   - Configuring RBAC rules ...
	I1201 20:09:03.580985  358766 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:09:03.585627  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:09:03.591157  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:09:03.594557  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:09:03.596997  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:09:03.599538  358766 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:09:03.937260  358766 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:09:04.355604  358766 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:09:04.940044  358766 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:09:04.942081  358766 kubeadm.go:319] 
	I1201 20:09:04.942162  358766 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:09:04.942172  358766 kubeadm.go:319] 
	I1201 20:09:04.942247  358766 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:09:04.942273  358766 kubeadm.go:319] 
	I1201 20:09:04.942326  358766 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:09:04.942401  358766 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:09:04.942553  358766 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:09:04.942579  358766 kubeadm.go:319] 
	I1201 20:09:04.942671  358766 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:09:04.942684  358766 kubeadm.go:319] 
	I1201 20:09:04.942747  358766 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:09:04.942757  358766 kubeadm.go:319] 
	I1201 20:09:04.942813  358766 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:09:04.942933  358766 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:09:04.943117  358766 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:09:04.943129  358766 kubeadm.go:319] 
	I1201 20:09:04.943301  358766 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:09:04.943409  358766 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:09:04.943415  358766 kubeadm.go:319] 
	I1201 20:09:04.943527  358766 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.943664  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 20:09:04.943691  358766 kubeadm.go:319] 	--control-plane 
	I1201 20:09:04.943696  358766 kubeadm.go:319] 
	I1201 20:09:04.943806  358766 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:09:04.943811  358766 kubeadm.go:319] 
	I1201 20:09:04.943935  358766 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.944090  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
	I1201 20:09:04.950014  358766 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 20:09:04.950166  358766 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:09:04.950195  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:09:04.950204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:04.952428  358766 out.go:179] * Configuring CNI (Container Networking Interface) ...
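The kubeadm join command above embeds a --discovery-token-ca-cert-hash; it can be recomputed from the cluster CA to confirm the two match. A sketch using the standard kubeadm recipe, assuming minikube's CA at /var/lib/minikube/certs/ca.crt (the path that appears in the kubelet configuration elsewhere in this log) and an RSA CA key, which is minikube's default:

    # SHA-256 over the DER-encoded CA public key, as used by kubeadm join.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
    # Should print the 06dc444a... hash shown in the join command above.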
	I1201 20:09:03.805467  363421 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-009682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:03.823590  363421 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:03.827746  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:03.838256  363421 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:03.838431  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:09:03.838501  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.872004  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.872030  363421 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:09:03.872101  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.903038  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.903064  363421 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:03.903073  363421 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1201 20:09:03.903222  363421 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-009682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:03.903358  363421 ssh_runner.go:195] Run: crio config
	I1201 20:09:03.959717  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:09:03.959751  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:03.959774  363421 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:09:03.959806  363421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-009682 NodeName:default-k8s-diff-port-009682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:03.959960  363421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-009682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:09:03.960038  363421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:09:03.970035  363421 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:03.970088  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:03.981115  363421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1201 20:09:03.997387  363421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:09:04.013157  363421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
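With the rendered config now copied to /var/tmp/minikube/kubeadm.yaml.new, the v1beta4 documents shown above can be sanity-checked with kubeadm itself. A sketch, assuming a kubeadm binary sits alongside kubelet and kubectl under /var/lib/minikube/binaries/v1.34.2 (the log only lists that directory, so this is an assumption) and that the `kubeadm config validate` subcommand is available, as it is in recent releases:

    # Validate the generated kubeadm config against its declared API versions.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new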
	I1201 20:09:04.026334  363421 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:04.029983  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:04.040473  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.126425  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
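After the daemon-reload and `systemctl start kubelet` above, the quickest way to confirm that the drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf took effect is to inspect the unit and its recent logs on the node; a minimal check:

    systemctl is-active kubelet                           # expect "active"
    systemctl cat kubelet | grep -- --hostname-override   # shows the ExecStart written above
    sudo journalctl -u kubelet --no-pager -n 20           # recent kubelet log lines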
	I1201 20:09:04.157017  363421 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682 for IP: 192.168.103.2
	I1201 20:09:04.157048  363421 certs.go:195] generating shared ca certs ...
	I1201 20:09:04.157075  363421 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.157268  363421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:04.157363  363421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:04.157388  363421 certs.go:257] generating profile certs ...
	I1201 20:09:04.157486  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key
	I1201 20:09:04.157547  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564
	I1201 20:09:04.157582  363421 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key
	I1201 20:09:04.157719  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:04.157763  363421 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:04.157774  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:04.157807  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:04.157844  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:04.157878  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:04.157927  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:04.158666  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:04.181431  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:04.214841  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:04.239463  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:04.265930  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:09:04.285322  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:04.302994  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:04.322040  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:04.343997  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:04.366089  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:04.385828  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:04.403981  363421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:04.416528  363421 ssh_runner.go:195] Run: openssl version
	I1201 20:09:04.423168  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:04.431851  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435576  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435634  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.472014  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:04.480631  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:04.489567  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493837  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493903  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.529237  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:04.538935  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:04.547861  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551700  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551759  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.587866  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:04.597205  363421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:04.600927  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:04.636786  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:04.673583  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:04.727932  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:04.773666  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:04.824841  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
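The run of `openssl x509 ... -checkend 86400` commands above is the certificate-expiry check: each command exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. The same check over the certificates named in the log, as a sketch:

    #!/usr/bin/env bash
    # -checkend N: exit 0 if the cert is still valid N seconds from now, 1 otherwise.
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400; then
        echo "OK       $c"
      else
        echo "EXPIRING $c (within 24h)"
      fi
    done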
	I1201 20:09:04.870082  363421 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:04.870188  363421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:04.870248  363421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:04.900068  363421 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:04.900091  363421 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:04.900105  363421 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:04.900111  363421 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:04.900115  363421 cri.go:89] found id: ""
	I1201 20:09:04.900169  363421 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:04.915170  363421 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:04Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:04.915380  363421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:04.924568  363421 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:04.924589  363421 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:04.924636  363421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:04.933995  363421 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:04.935868  363421 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-009682" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.936660  363421 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-009682" cluster setting kubeconfig missing "default-k8s-diff-port-009682" context setting]
	I1201 20:09:04.937981  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.940402  363421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:04.953428  363421 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1201 20:09:04.953484  363421 kubeadm.go:602] duration metric: took 28.88936ms to restartPrimaryControlPlane
	I1201 20:09:04.953496  363421 kubeadm.go:403] duration metric: took 83.422203ms to StartCluster
	I1201 20:09:04.953514  363421 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.953648  363421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.956713  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.957022  363421 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:04.957280  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:04.957337  363421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:04.957414  363421 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.957431  363421 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.957439  363421 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:04.957463  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.957965  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958147  363421 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958169  363421 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.958178  363421 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:04.958205  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.958327  363421 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958364  363421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-009682"
	I1201 20:09:04.958736  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958772  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.962452  363421 out.go:179] * Verifying Kubernetes components...
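Once the profile is back up, the addon bookkeeping above (storage-provisioner, dashboard, default-storageclass) can be cross-checked from the host. A sketch using the profile/context name from this run; kubernetes-dashboard is assumed to be the namespace created by the dashboard-ns.yaml manifest installed below:

    minikube -p default-k8s-diff-port-009682 addons list
    kubectl --context default-k8s-diff-port-009682 -n kubernetes-dashboard get pods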
	W1201 20:09:03.005710  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:09:03.508253  352497 pod_ready.go:94] pod "coredns-7d764666f9-6kzhv" is "Ready"
	I1201 20:09:03.508310  352497 pod_ready.go:86] duration metric: took 31.508797646s for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.512451  352497 pod_ready.go:83] waiting for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.517492  352497 pod_ready.go:94] pod "etcd-no-preload-240359" is "Ready"
	I1201 20:09:03.517514  352497 pod_ready.go:86] duration metric: took 5.040457ms for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.519719  352497 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.523698  352497 pod_ready.go:94] pod "kube-apiserver-no-preload-240359" is "Ready"
	I1201 20:09:03.523718  352497 pod_ready.go:86] duration metric: took 3.972027ms for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.525515  352497 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.703526  352497 pod_ready.go:94] pod "kube-controller-manager-no-preload-240359" is "Ready"
	I1201 20:09:03.703559  352497 pod_ready.go:86] duration metric: took 178.021828ms for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.903891  352497 pod_ready.go:83] waiting for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.304197  352497 pod_ready.go:94] pod "kube-proxy-zbbsb" is "Ready"
	I1201 20:09:04.304226  352497 pod_ready.go:86] duration metric: took 400.309563ms for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.503580  352497 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906218  352497 pod_ready.go:94] pod "kube-scheduler-no-preload-240359" is "Ready"
	I1201 20:09:04.906257  352497 pod_ready.go:86] duration metric: took 402.653219ms for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906272  352497 pod_ready.go:40] duration metric: took 32.911773572s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:04.968561  352497 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:04.971433  352497 out.go:179] * Done! kubectl is now configured to use "no-preload-240359" cluster and "default" namespace by default
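The pod_ready polling above has a direct kubectl equivalent, which is handy when reproducing the wait outside the test harness; the context name comes from the "Done!" line and the label selectors from the summary line above it:

    # Same readiness conditions the test waits on, expressed with kubectl wait.
    kubectl --context no-preload-240359 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
    kubectl --context no-preload-240359 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=120s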
	I1201 20:09:04.964059  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.995900  363421 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:04.997174  363421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:04.997209  363421 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:04.998860  363421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:04.998888  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:04.998905  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:04.998920  363421 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:04.998954  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.998983  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.999136  363421 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.999150  363421 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:04.999178  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.999898  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:05.045128  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.046144  363421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.046164  363421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:05.046223  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:05.057331  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.078989  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.178412  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:05.205272  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:05.209155  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:05.209177  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:05.212411  363421 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:05.224200  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.235326  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:05.235354  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:05.271440  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:05.271468  363421 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:05.298205  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:05.298230  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:05.323776  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:05.323811  363421 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:05.348888  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:05.348939  363421 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:05.368484  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:05.368507  363421 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:05.396227  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:05.396254  363421 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:05.428995  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:05.429022  363421 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:05.460762  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:06.771032  363421 node_ready.go:49] node "default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:06.771062  363421 node_ready.go:38] duration metric: took 1.558615333s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:06.771088  363421 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:06.771140  363421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:07.343358  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.137932602s)
	I1201 20:09:07.343426  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.119182243s)
	I1201 20:09:07.343531  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.8827376s)
	I1201 20:09:07.343581  363421 api_server.go:72] duration metric: took 2.386523736s to wait for apiserver process to appear ...
	I1201 20:09:07.343592  363421 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:07.343666  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:07.344907  363421 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-009682 addons enable metrics-server
	
	I1201 20:09:07.349323  363421 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:07.349428  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:07.349453  363421 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:07.350469  363421 addons.go:530] duration metric: took 2.393112876s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
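The 500 responses above come from /healthz while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still settling; minikube keeps polling until they pass. The same per-check breakdown can be requested through kubectl, which reuses the admin credentials from the kubeconfig; a sketch with the context name from this profile:

    # Per-check healthz/readyz breakdown, as shown in the log above.
    kubectl --context default-k8s-diff-port-009682 get --raw '/healthz?verbose'
    kubectl --context default-k8s-diff-port-009682 get --raw '/readyz?verbose'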
	I1201 20:09:04.953734  358766 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:09:04.963206  358766 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1201 20:09:04.963275  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:09:04.985147  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:09:05.377592  358766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:09:05.377721  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.377810  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-456990 minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=newest-cni-456990 minikube.k8s.io/primary=true
	I1201 20:09:05.396988  358766 ops.go:34] apiserver oom_adj: -16
	I1201 20:09:05.488649  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.988848  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.488812  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.989508  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:07.489212  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1201 20:09:04.208582  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:06.704348  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:07.988921  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.488969  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.989421  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.489572  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.556897  358766 kubeadm.go:1114] duration metric: took 4.179215442s to wait for elevateKubeSystemPrivileges
	I1201 20:09:09.556925  358766 kubeadm.go:403] duration metric: took 12.057888116s to StartCluster
	I1201 20:09:09.556942  358766 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.557018  358766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:09.561139  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.561442  358766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:09.561528  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 20:09:09.561526  358766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:09.561616  358766 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:09.561635  358766 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	I1201 20:09:09.561663  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.561697  358766 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:09.561707  358766 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:09.561716  358766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:09.562001  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.562203  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.563633  358766 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:09.565146  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:09.585957  358766 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	I1201 20:09:09.585993  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.586354  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.589424  358766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:09.590905  358766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.590926  358766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:09.590986  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.620117  358766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.620141  358766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:09.620204  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.622564  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.643761  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.651851  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 20:09:09.707356  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:09.735698  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.774797  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.833543  358766 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1201 20:09:09.835322  358766 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:09.835378  358766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:10.044020  358766 api_server.go:72] duration metric: took 482.54493ms to wait for apiserver process to appear ...
	I1201 20:09:10.044048  358766 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:10.044066  358766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:10.048749  358766 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:10.049558  358766 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:10.049578  358766 api_server.go:131] duration metric: took 5.523573ms to wait for apiserver health ...
	I1201 20:09:10.049586  358766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:10.050178  358766 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1201 20:09:10.051821  358766 addons.go:530] duration metric: took 490.303553ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1201 20:09:10.052035  358766 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:10.052063  358766 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052076  358766 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:10.052094  358766 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:09:10.052103  358766 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running
	I1201 20:09:10.052117  358766 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:10.052128  358766 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:09:10.052138  358766 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:10.052148  358766 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052158  358766 system_pods.go:74] duration metric: took 2.56626ms to wait for pod list to return data ...
	I1201 20:09:10.052170  358766 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:10.054122  358766 default_sa.go:45] found service account: "default"
	I1201 20:09:10.054138  358766 default_sa.go:55] duration metric: took 1.961704ms for default service account to be created ...
	I1201 20:09:10.054150  358766 kubeadm.go:587] duration metric: took 492.678996ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:10.054169  358766 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:10.056013  358766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:10.056034  358766 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:10.056055  358766 node_conditions.go:105] duration metric: took 1.88044ms to run NodePressure ...
	I1201 20:09:10.056067  358766 start.go:242] waiting for startup goroutines ...
	I1201 20:09:10.338257  358766 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-456990" context rescaled to 1 replicas
	I1201 20:09:10.338330  358766 start.go:247] waiting for cluster config update ...
	I1201 20:09:10.338346  358766 start.go:256] writing updated cluster config ...
	I1201 20:09:10.338608  358766 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:10.395956  358766 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:10.398166  358766 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
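The `kubectl ... replace -f -` pipeline at 20:09:09.651 above is how the host.minikube.internal record gets into CoreDNS: the coredns ConfigMap is read, two sed expressions insert a `log` directive before the `errors` line and a `hosts` block before the `forward . /etc/resolv.conf` line, and the edited Corefile is written back. Reconstructed from that command alone (the untouched directives in between are elided here), the affected part of the Corefile ends up looking roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

The `{"host.minikube.internal": 192.168.76.1} host record injected` line at 20:09:09.833 is the confirmation that this replace succeeded.
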
	I1201 20:09:07.844328  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:07.848970  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:07.849000  363421 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:08.344445  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:08.349233  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1201 20:09:08.350078  363421 api_server.go:141] control plane version: v1.34.2
	I1201 20:09:08.350100  363421 api_server.go:131] duration metric: took 1.006452276s to wait for apiserver health ...
	I1201 20:09:08.350114  363421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:08.353556  363421 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:08.353633  363421 system_pods.go:61] "coredns-66bc5c9577-hf646" [959685f2-3196-405c-b2f8-bb177bd28bcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:09:08.353649  363421 system_pods.go:61] "etcd-default-k8s-diff-port-009682" [1290bc7e-2b19-417b-b878-8b8866ebd5ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:08.353658  363421 system_pods.go:61] "kindnet-pqt6x" [358ffbfc-91b7-4ce9-a3ed-987d5af5abcf] Running
	I1201 20:09:08.353673  363421 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-009682" [8a086238-bc1f-4e44-8953-a0dbb4d3081c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:08.353687  363421 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-009682" [ea3a59e8-9da7-4c8c-934a-2f80e1445f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:08.353694  363421 system_pods.go:61] "kube-proxy-fjn7h" [f4fdbbdd-f85d-420b-b618-6edfd4259349] Running
	I1201 20:09:08.353708  363421 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-009682" [428d94a5-7a6e-464a-9d09-2b39687d913a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:08.353713  363421 system_pods.go:61] "storage-provisioner" [329b9699-cf53-4f5f-b7c3-52f77070a59f] Running
	I1201 20:09:08.353720  363421 system_pods.go:74] duration metric: took 3.593864ms to wait for pod list to return data ...
	I1201 20:09:08.353728  363421 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:08.356474  363421 default_sa.go:45] found service account: "default"
	I1201 20:09:08.356492  363421 default_sa.go:55] duration metric: took 2.760154ms for default service account to be created ...
	I1201 20:09:08.356500  363421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:09:08.358911  363421 system_pods.go:86] 8 kube-system pods found
	I1201 20:09:08.358946  363421 system_pods.go:89] "coredns-66bc5c9577-hf646" [959685f2-3196-405c-b2f8-bb177bd28bcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:09:08.358959  363421 system_pods.go:89] "etcd-default-k8s-diff-port-009682" [1290bc7e-2b19-417b-b878-8b8866ebd5ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:08.358965  363421 system_pods.go:89] "kindnet-pqt6x" [358ffbfc-91b7-4ce9-a3ed-987d5af5abcf] Running
	I1201 20:09:08.358974  363421 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-009682" [8a086238-bc1f-4e44-8953-a0dbb4d3081c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:08.358985  363421 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-009682" [ea3a59e8-9da7-4c8c-934a-2f80e1445f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:08.358992  363421 system_pods.go:89] "kube-proxy-fjn7h" [f4fdbbdd-f85d-420b-b618-6edfd4259349] Running
	I1201 20:09:08.359000  363421 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-009682" [428d94a5-7a6e-464a-9d09-2b39687d913a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:08.359006  363421 system_pods.go:89] "storage-provisioner" [329b9699-cf53-4f5f-b7c3-52f77070a59f] Running
	I1201 20:09:08.359014  363421 system_pods.go:126] duration metric: took 2.508618ms to wait for k8s-apps to be running ...
	I1201 20:09:08.359022  363421 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:09:08.359070  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:08.372350  363421 system_svc.go:56] duration metric: took 13.321686ms WaitForService to wait for kubelet
	I1201 20:09:08.372373  363421 kubeadm.go:587] duration metric: took 3.41531784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:09:08.372389  363421 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:08.374954  363421 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:08.374984  363421 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:08.375009  363421 node_conditions.go:105] duration metric: took 2.614763ms to run NodePressure ...
	I1201 20:09:08.375026  363421 start.go:242] waiting for startup goroutines ...
	I1201 20:09:08.375057  363421 start.go:247] waiting for cluster config update ...
	I1201 20:09:08.375067  363421 start.go:256] writing updated cluster config ...
	I1201 20:09:08.375354  363421 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:08.378839  363421 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:08.382028  363421 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hf646" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:09:10.389146  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:09.204240  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:11.206421  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:13.704548  354303 pod_ready.go:94] pod "coredns-66bc5c9577-qngk9" is "Ready"
	I1201 20:09:13.704575  354303 pod_ready.go:86] duration metric: took 33.505908319s for pod "coredns-66bc5c9577-qngk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.707425  354303 pod_ready.go:83] waiting for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.711749  354303 pod_ready.go:94] pod "etcd-embed-certs-990820" is "Ready"
	I1201 20:09:13.711773  354303 pod_ready.go:86] duration metric: took 4.323983ms for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.713928  354303 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.717307  354303 pod_ready.go:94] pod "kube-apiserver-embed-certs-990820" is "Ready"
	I1201 20:09:13.717325  354303 pod_ready.go:86] duration metric: took 3.374812ms for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.719591  354303 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.902994  354303 pod_ready.go:94] pod "kube-controller-manager-embed-certs-990820" is "Ready"
	I1201 20:09:13.903023  354303 pod_ready.go:86] duration metric: took 183.37842ms for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.102962  354303 pod_ready.go:83] waiting for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.503430  354303 pod_ready.go:94] pod "kube-proxy-t2nmz" is "Ready"
	I1201 20:09:14.503456  354303 pod_ready.go:86] duration metric: took 400.471194ms for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.702981  354303 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:15.102882  354303 pod_ready.go:94] pod "kube-scheduler-embed-certs-990820" is "Ready"
	I1201 20:09:15.102914  354303 pod_ready.go:86] duration metric: took 399.904472ms for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:15.102929  354303 pod_ready.go:40] duration metric: took 34.974775887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:15.148041  354303 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:09:15.149776  354303 out.go:179] * Done! kubectl is now configured to use "embed-certs-990820" cluster and "default" namespace by default
	W1201 20:09:12.888555  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:15.388530  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:17.388819  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
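The repeated `pod "..." is not "Ready"` / `is "Ready"` lines above come from a polling loop: the pod object is re-fetched until its Ready condition reports True or a deadline (such as the 4m0s extra-wait budget logged at 20:09:08.378) expires. A minimal client-go sketch of that pattern, with the kubeconfig path, namespace, pod name and polling cadence as illustrative assumptions rather than minikube's actual pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-hf646", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log shows a similar re-check cadence
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
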
	
	
	==> CRI-O <==
	Dec 01 20:08:43 no-preload-240359 crio[569]: time="2025-12-01T20:08:43.209735265Z" level=info msg="Started container" PID=1731 containerID=2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper id=375b63f4-df96-4933-af04-5d40a651f69f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8017bf33122582d1e59f276fc60b3d8d3c26a9dac8e48b29a3fe7329713e84b2
	Dec 01 20:08:44 no-preload-240359 crio[569]: time="2025-12-01T20:08:44.169233746Z" level=info msg="Removing container: d2b4c96946ed8e70164e7bb47617ef1647422cb6c39a123be5e8cdab046738ba" id=befa0468-d474-4215-ade5-6d22ce42c3ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:44 no-preload-240359 crio[569]: time="2025-12-01T20:08:44.181957192Z" level=info msg="Removed container d2b4c96946ed8e70164e7bb47617ef1647422cb6c39a123be5e8cdab046738ba: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=befa0468-d474-4215-ade5-6d22ce42c3ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.094365655Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=86fe568e-ef9b-41cd-af44-0674c9aa5ff0 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.097016153Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f42c7b61-7de7-48dd-80b8-0e959759494a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.100262046Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=441139c5-b807-490f-9f55-900846424451 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.10042383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.108077325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.108788707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.134183466Z" level=info msg="Created container cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=441139c5-b807-490f-9f55-900846424451 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.134823428Z" level=info msg="Starting container: cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603" id=bf996bda-7a24-4b15-b8b6-6be90eb4c1b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.136897409Z" level=info msg="Started container" PID=1743 containerID=cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper id=bf996bda-7a24-4b15-b8b6-6be90eb4c1b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8017bf33122582d1e59f276fc60b3d8d3c26a9dac8e48b29a3fe7329713e84b2
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.202084124Z" level=info msg="Removing container: 2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b" id=20b8083a-7af6-4c57-a092-7d9c542dc8ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.212673811Z" level=info msg="Removed container 2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=20b8083a-7af6-4c57-a092-7d9c542dc8ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.222273612Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=88dff7a0-6dc9-4b9b-ab59-75107d543af4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.223316939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9917d42d-fcf7-41e6-8c1b-0fd17d4f1345 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.224476876Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3e9bfc97-822d-4df4-9917-65f9ad1ee75f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.22472703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229129325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229320758Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b8447004c46863bfa1d5ad58a729531f777916fd4fea8e3d868322e1d903e677/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229352706Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b8447004c46863bfa1d5ad58a729531f777916fd4fea8e3d868322e1d903e677/merged/etc/group: no such file or directory"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229558089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.260231519Z" level=info msg="Created container a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457: kube-system/storage-provisioner/storage-provisioner" id=3e9bfc97-822d-4df4-9917-65f9ad1ee75f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.261021546Z" level=info msg="Starting container: a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457" id=4dbc5cb9-f143-4a1e-8737-82dac7614d17 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.262987092Z" level=info msg="Started container" PID=1757 containerID=a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457 description=kube-system/storage-provisioner/storage-provisioner id=4dbc5cb9-f143-4a1e-8737-82dac7614d17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9eab2fd2bff38473f0da79ece9306de53007b2431ebe527cc472142687e387d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a765c7d4cc7dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   f9eab2fd2bff3       storage-provisioner                          kube-system
	cb84798749e15       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   8017bf3312258       dashboard-metrics-scraper-867fb5f87b-fgll6   kubernetes-dashboard
	ba8a74fae657c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   309ebc476bc92       kubernetes-dashboard-b84665fb8-f7grf         kubernetes-dashboard
	a45016736542b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   b75982ce3d8bd       busybox                                      default
	510968b59805a       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   7ef1387c72e02       coredns-7d764666f9-6kzhv                     kube-system
	2f185801b7d0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   f9eab2fd2bff3       storage-provisioner                          kube-system
	844ba0fcae08d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   32cf85dfd63f4       kindnet-s7r55                                kube-system
	de746e8ab3a57       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           48 seconds ago      Running             kube-proxy                  0                   1241a9545ea16       kube-proxy-zbbsb                             kube-system
	6b752f5fa5d25       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           51 seconds ago      Running             kube-controller-manager     0                   f4094ae54b5e3       kube-controller-manager-no-preload-240359    kube-system
	e49b2d4ba56ef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           51 seconds ago      Running             etcd                        0                   aa45fbe1b1335       etcd-no-preload-240359                       kube-system
	29cdf91985783       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           51 seconds ago      Running             kube-scheduler              0                   69aad79bb66af       kube-scheduler-no-preload-240359             kube-system
	36005a70764f4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           51 seconds ago      Running             kube-apiserver              0                   05a86b5e32e16       kube-apiserver-no-preload-240359             kube-system
	
	
	==> coredns [510968b59805a625501e44f964dc5dbaaeca09bb0e1fad75aead446e677e99e2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51640 - 6539 "HINFO IN 2956688600488665119.1571566898641387574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02349224s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-240359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-240359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=no-preload-240359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-240359
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-240359
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                061d53b4-7f5d-40c9-8604-f01915628ca1
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-6kzhv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-240359                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-s7r55                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-240359              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-240359     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-zbbsb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-240359              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-fgll6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-f7grf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node no-preload-240359 event: Registered Node no-preload-240359 in Controller
	  Normal  RegisteredNode  46s   node-controller  Node no-preload-240359 event: Registered Node no-preload-240359 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6] <==
	{"level":"warn","ts":"2025-12-01T20:08:29.694686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.702019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.709509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.716944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.723741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.735210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.741363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.748566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.763495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.770799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.778102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.785680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.793407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.800447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.807834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.814437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.821045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.827884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.835212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.854419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.861250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.868221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.876619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.931814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:56.938248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.019685ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597438231491771 > lease_revoke:<id:06ed9adb87e93420>","response":"size:28"}
	
	
	==> kernel <==
	 20:09:19 up  1:51,  0 user,  load average: 3.92, 3.39, 2.39
	Linux no-preload-240359 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [844ba0fcae08d61a06cb533c6dd7bc40ecb98db5d968faabcf4760594e3545c0] <==
	I1201 20:08:31.691636       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:31.691963       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 20:08:31.692204       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:31.692230       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:31.692259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:31.893472       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:31.893545       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:31.893560       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:31.893757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:32.194650       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:32.194682       1 metrics.go:72] Registering metrics
	I1201 20:08:32.194754       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:41.893448       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:08:41.893527       1 main.go:301] handling current node
	I1201 20:08:51.893587       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:08:51.893625       1 main.go:301] handling current node
	I1201 20:09:01.893380       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:09:01.893411       1 main.go:301] handling current node
	I1201 20:09:11.893685       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:09:11.893732       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231] <==
	I1201 20:08:30.433562       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.433591       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.434375       1 aggregator.go:187] initial CRD sync complete...
	I1201 20:08:30.434388       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:08:30.434395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:08:30.434403       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:08:30.434634       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.440045       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:08:30.440435       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 20:08:30.440479       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1201 20:08:30.444806       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:08:30.449876       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:08:30.464489       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:08:30.464627       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:30.728449       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:08:30.760483       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:08:30.781929       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:30.790021       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:30.798415       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:08:30.833802       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.204.247"}
	I1201 20:08:30.845993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.169.63"}
	I1201 20:08:31.337557       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:08:34.033919       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:08:34.136892       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:08:34.234714       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140] <==
	I1201 20:08:33.595986       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596010       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596082       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596920       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:33.597131       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.601778       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602036       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602085       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602179       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.601893       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602252       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602273       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602527       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602305       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602698       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602736       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.604195       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1201 20:08:33.604321       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-240359"
	I1201 20:08:33.604399       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1201 20:08:33.607802       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.607963       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.697685       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.702853       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.702877       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:08:33.702883       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [de746e8ab3a57e792862aca89bb9e8210ee00df2dcb4ec56548296e6b1618ac7] <==
	I1201 20:08:31.503054       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:08:31.570576       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:31.671374       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:31.671405       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 20:08:31.671520       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:08:31.693042       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:31.693108       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:08:31.698498       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:08:31.698986       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:08:31.699053       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:31.701330       1 config.go:200] "Starting service config controller"
	I1201 20:08:31.701364       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:08:31.701404       1 config.go:309] "Starting node config controller"
	I1201 20:08:31.701419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:08:31.701425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:08:31.701521       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:08:31.701529       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:08:31.701546       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:08:31.701550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:08:31.801556       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:08:31.801616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:08:31.801625       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51] <==
	I1201 20:08:28.976817       1 serving.go:386] Generated self-signed cert in-memory
	I1201 20:08:30.402549       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1201 20:08:30.402591       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:30.409904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:30.409930       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410049       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1201 20:08:30.410132       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410103       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1201 20:08:30.410183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410460       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:08:30.410757       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:08:30.510725       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.510868       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.511047       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.167858     722 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-240359" containerName="kube-scheduler"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.167990     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: I1201 20:08:44.168018     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.168220     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: E1201 20:08:45.172511     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: I1201 20:08:45.172547     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: E1201 20:08:45.172768     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: E1201 20:08:46.175474     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: I1201 20:08:46.175512     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: E1201 20:08:46.175711     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.093737     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.093779     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.200603     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.200904     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.200942     722 scope.go:122] "RemoveContainer" containerID="cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.201154     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: E1201 20:08:56.205966     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: I1201 20:08:56.206021     722 scope.go:122] "RemoveContainer" containerID="cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: E1201 20:08:56.206243     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:09:02 no-preload-240359 kubelet[722]: I1201 20:09:02.221811     722 scope.go:122] "RemoveContainer" containerID="2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf"
	Dec 01 20:09:03 no-preload-240359 kubelet[722]: E1201 20:09:03.126752     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6kzhv" containerName="coredns"
	Dec 01 20:09:17 no-preload-240359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:17 no-preload-240359 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:17 no-preload-240359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:17 no-preload-240359 systemd[1]: kubelet.service: Consumed 1.708s CPU time.
	
	
	==> kubernetes-dashboard [ba8a74fae657c1cb17397fdb3a557728f7746032c8530cacd94377d02328e38e] <==
	2025/12/01 20:08:39 Starting overwatch
	2025/12/01 20:08:39 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:39 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:39 Using secret token for csrf signing
	2025/12/01 20:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:39 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/01 20:08:39 Generating JWE encryption key
	2025/12/01 20:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:40 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:40 Creating in-cluster Sidecar client
	2025/12/01 20:08:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:40 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf] <==
	I1201 20:08:31.468668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:01.471825       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457] <==
	I1201 20:09:02.276685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:02.284953       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:02.285008       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:02.287475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:05.742870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:10.003743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:13.602157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:16.656515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.679002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.683211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:19.683387       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:19.683530       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521!
	I1201 20:09:19.683531       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fcc8b66-6889-4a42-8e02-82e3bfaf2063", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521 became leader
	W1201 20:09:19.685497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.689368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:19.783771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-240359 -n no-preload-240359: exit status 2 (329.651912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-240359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-240359
helpers_test.go:243: (dbg) docker inspect no-preload-240359:

-- stdout --
	[
	    {
	        "Id": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	        "Created": "2025-12-01T20:07:06.01914801Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:21.884818011Z",
	            "FinishedAt": "2025-12-01T20:08:20.490851656Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hostname",
	        "HostsPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/hosts",
	        "LogPath": "/var/lib/docker/containers/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340/52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340-json.log",
	        "Name": "/no-preload-240359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-240359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-240359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52fdbf3aa5c525a500606da7926c622143168800093b438c6161135098788340",
	                "LowerDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68d93e71280e719e54de04ab62d6ba0bb0b66e6908f589a7add78b928696e04f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-240359",
	                "Source": "/var/lib/docker/volumes/no-preload-240359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-240359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-240359",
	                "name.minikube.sigs.k8s.io": "no-preload-240359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b0930adfcc271d0647ed94570295bcb637228a0a45bd8b4334dacd5f7800b88c",
	            "SandboxKey": "/var/run/docker/netns/b0930adfcc27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-240359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9442b61c89479da474674c2efe3f782398fb10944284ed674aaa668317b06131",
	                    "EndpointID": "87076adabffbdcd6a2bcc47050d1b93ceb9df7a72a9b70f1b35d7cfb77d50b64",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6a:ac:01:cd:75:42",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-240359",
	                        "52fdbf3aa5c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359: exit status 2 (324.765414ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-240359 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-240359 logs -n 25: (1.306470901s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-003720                                                                                                                                                                                                                      │ disable-driver-mounts-003720 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:07 UTC │
	│ start   │ -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:07 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-240359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p no-preload-240359 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:08:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:08:57.524741  363421 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:08:57.524856  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.524866  363421 out.go:374] Setting ErrFile to fd 2...
	I1201 20:08:57.524872  363421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:08:57.525166  363421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:08:57.525742  363421 out.go:368] Setting JSON to false
	I1201 20:08:57.527230  363421 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6688,"bootTime":1764613049,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:08:57.527326  363421 start.go:143] virtualization: kvm guest
	I1201 20:08:57.529688  363421 out.go:179] * [default-k8s-diff-port-009682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:08:57.530978  363421 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:08:57.530985  363421 notify.go:221] Checking for updates...
	I1201 20:08:57.532313  363421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:08:57.533552  363421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:08:57.534766  363421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:08:57.535947  363421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:08:57.537115  363421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:08:57.538758  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:08:57.539252  363421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:08:57.564657  363421 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:08:57.564748  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.627789  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.613982153 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.627885  363421 docker.go:319] overlay module found
	I1201 20:08:57.629736  363421 out.go:179] * Using the docker driver based on existing profile
	I1201 20:08:57.630805  363421 start.go:309] selected driver: docker
	I1201 20:08:57.630817  363421 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.630891  363421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:08:57.631486  363421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:08:57.694034  363421 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-01 20:08:57.682818846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:08:57.694423  363421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:08:57.694466  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:08:57.694533  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:57.694577  363421 start.go:353] cluster config:
	{Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.696647  363421 out.go:179] * Starting "default-k8s-diff-port-009682" primary control-plane node in "default-k8s-diff-port-009682" cluster
	I1201 20:08:57.697915  363421 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:08:57.699088  363421 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:08:54.033979  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.451822367s)
	I1201 20:08:54.034006  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1201 20:08:54.034040  358766 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:54.034079  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1201 20:08:55.285959  358766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.251855407s)
	I1201 20:08:55.285986  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1201 20:08:55.286009  358766 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.286056  358766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1201 20:08:55.835833  358766 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1201 20:08:55.835878  358766 cache_images.go:125] Successfully loaded all cached images
	I1201 20:08:55.835887  358766 cache_images.go:94] duration metric: took 9.220203533s to LoadCachedImages
	I1201 20:08:55.835902  358766 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:08:55.836000  358766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:08:55.836092  358766 ssh_runner.go:195] Run: crio config
	I1201 20:08:55.882185  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:08:55.882204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:08:55.882221  358766 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:08:55.882240  358766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:08:55.882388  358766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
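	A generated config like the one above can be sanity-checked without touching node state; a minimal sketch, assuming the kubeadm binary is already staged under /var/lib/minikube/binaries as in this run:

		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

	--dry-run prints what kubeadm would do instead of applying it, so a malformed ClusterConfiguration or KubeletConfiguration surfaces before the real kubeadm init below (add the same --ignore-preflight-errors list used there if preflight checks are expected to fail inside the container).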
	
	I1201 20:08:55.882456  358766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.896478  358766 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1201 20:08:55.896542  358766 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:08:55.905428  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
	I1201 20:08:55.905471  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1201 20:08:55.905478  358766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:08:55.905492  358766 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256
	I1201 20:08:55.905548  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1201 20:08:55.905560  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1201 20:08:55.924100  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1201 20:08:55.924135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1201 20:08:55.924162  358766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1201 20:08:55.924163  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1201 20:08:55.924196  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1201 20:08:55.931240  358766 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1201 20:08:55.931269  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1201 20:08:56.484601  358766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:08:56.493223  358766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:08:56.506733  358766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:08:56.551910  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:08:56.565479  358766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:08:56.569659  358766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:08:56.674504  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:08:56.766035  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:08:56.790444  358766 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:08:56.790466  358766 certs.go:195] generating shared ca certs ...
	I1201 20:08:56.790488  358766 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.790666  358766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:08:56.790711  358766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:08:56.790722  358766 certs.go:257] generating profile certs ...
	I1201 20:08:56.790775  358766 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:08:56.790787  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt with IP's: []
	I1201 20:08:56.856182  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt ...
	I1201 20:08:56.856207  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.crt: {Name:mk188d1d1ba3b1359a8c4c959ae5d3c192a20a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856394  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key ...
	I1201 20:08:56.856408  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key: {Name:mkb94c2da30d31143505840f4576d1cd1a4db927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:56.856490  358766 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:08:56.856504  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1201 20:08:57.050302  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 ...
	I1201 20:08:57.050328  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757: {Name:mkeefb489f4b625e46090918386fdc47c61b5f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050500  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 ...
	I1201 20:08:57.050517  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757: {Name:mkf596c61e744a065cd8401e41d8e454de70b079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.050632  358766 certs.go:382] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt
	I1201 20:08:57.050717  358766 certs.go:386] copying /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757 -> /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key
	I1201 20:08:57.050771  358766 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:08:57.050786  358766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt with IP's: []
	I1201 20:08:57.090707  358766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt ...
	I1201 20:08:57.090730  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt: {Name:mk173cd6fe67eab6f70384a04dff60d8ad263813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.090894  358766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key ...
	I1201 20:08:57.090908  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key: {Name:mk07102f58d64e403b75622a5498a55b5a7d2682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:08:57.091078  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:08:57.091119  358766 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:08:57.091129  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:08:57.091155  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:08:57.091178  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:08:57.091204  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:08:57.091249  358766 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:08:57.091846  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:08:57.110296  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:08:57.127543  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:08:57.145135  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:08:57.161965  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:08:57.178832  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:08:57.196202  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:08:57.216297  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:08:57.235646  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:08:57.255802  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:08:57.274205  358766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:08:57.291845  358766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:08:57.305221  358766 ssh_runner.go:195] Run: openssl version
	I1201 20:08:57.311715  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:08:57.321501  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325823  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.325889  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:08:57.365528  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:08:57.375267  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:08:57.384499  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388796  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.388853  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:08:57.427537  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:08:57.436653  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:08:57.446332  358766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450883  358766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.450941  358766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:08:57.485407  358766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
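	The hash-named links created above follow OpenSSL's subject-hash convention, so each one can be reproduced by hand; a minimal sketch using the minikubeCA certificate from this run:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0

	OpenSSL looks certificates up in /etc/ssl/certs by that <hash>.0 name, which is why the loop runs openssl x509 -hash before creating each symlink.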
	I1201 20:08:57.494810  358766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:08:57.498985  358766 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:08:57.499041  358766 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:08:57.499130  358766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:08:57.499181  358766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:08:57.528197  358766 cri.go:89] found id: ""
	I1201 20:08:57.528247  358766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:08:57.536955  358766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:08:57.545150  358766 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1201 20:08:57.545217  358766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:08:57.553840  358766 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:08:57.553872  358766 kubeadm.go:158] found existing configuration files:
	
	I1201 20:08:57.553923  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:08:57.562547  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:08:57.562603  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:08:57.570825  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:08:57.579016  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:08:57.579104  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:08:57.588155  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.598007  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:08:57.598081  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:08:57.607460  358766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:08:57.616501  358766 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:08:57.616576  358766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:08:57.625112  358766 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1201 20:08:57.668430  358766 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1201 20:08:57.668522  358766 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:08:57.700560  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:08:57.700599  363421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:08:57.700606  363421 cache.go:65] Caching tarball of preloaded images
	I1201 20:08:57.700646  363421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1201 20:08:57.700699  363421 preload.go:238] Found /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:08:57.700709  363421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:08:57.700830  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:57.725595  363421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:08:57.725622  363421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1201 20:08:57.725643  363421 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:08:57.725678  363421 start.go:360] acquireMachinesLock for default-k8s-diff-port-009682: {Name:mk42586c39f050856fb58aa29e83d0a77c4546b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:08:57.725749  363421 start.go:364] duration metric: took 47.794µs to acquireMachinesLock for "default-k8s-diff-port-009682"
	I1201 20:08:57.725771  363421 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:08:57.725786  363421 fix.go:54] fixHost starting: 
	I1201 20:08:57.726056  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:57.747795  363421 fix.go:112] recreateIfNeeded on default-k8s-diff-port-009682: state=Stopped err=<nil>
	W1201 20:08:57.747827  363421 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:08:57.757685  358766 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1201 20:08:57.757794  358766 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1201 20:08:57.757867  358766 kubeadm.go:319] OS: Linux
	I1201 20:08:57.757937  358766 kubeadm.go:319] CGROUPS_CPU: enabled
	I1201 20:08:57.758000  358766 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1201 20:08:57.758103  358766 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1201 20:08:57.758195  358766 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1201 20:08:57.758280  358766 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1201 20:08:57.758368  358766 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1201 20:08:57.758454  358766 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1201 20:08:57.758515  358766 kubeadm.go:319] CGROUPS_IO: enabled
	I1201 20:08:57.824201  358766 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:08:57.824361  358766 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:08:57.824478  358766 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:08:57.839908  358766 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1201 20:08:54.705077  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:08:57.204772  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:08:57.842269  358766 out.go:252]   - Generating certificates and keys ...
	I1201 20:08:57.842407  358766 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:08:57.842551  358766 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:08:57.881252  358766 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:08:58.037461  358766 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:08:58.107548  358766 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:08:58.187232  358766 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:08:58.505054  358766 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:08:58.505252  358766 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.539384  358766 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:08:58.539557  358766 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-456990] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1201 20:08:58.601325  358766 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:08:58.651270  358766 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:08:58.937961  358766 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:08:58.938159  358766 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:08:59.070341  358766 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:08:59.130405  358766 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:08:59.174058  358766 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:08:59.235555  358766 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:08:59.401392  358766 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:08:59.401904  358766 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:08:59.405522  358766 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1201 20:08:58.006721  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	W1201 20:09:00.505892  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:08:57.749349  363421 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-009682" ...
	I1201 20:08:57.749457  363421 cli_runner.go:164] Run: docker start default-k8s-diff-port-009682
	I1201 20:08:58.018381  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:08:58.043206  363421 kic.go:430] container "default-k8s-diff-port-009682" state is running.
	I1201 20:08:58.043709  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:08:58.063866  363421 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/config.json ...
	I1201 20:08:58.064140  363421 machine.go:94] provisionDockerMachine start ...
	I1201 20:08:58.064229  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:08:58.083160  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:08:58.083444  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:08:58.083458  363421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:08:58.084209  363421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37088->127.0.0.1:33133: read: connection reset by peer
	I1201 20:09:01.230589  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.230617  363421 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-009682"
	I1201 20:09:01.230674  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.253348  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.253664  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.253688  363421 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-009682 && echo "default-k8s-diff-port-009682" | sudo tee /etc/hostname
	I1201 20:09:01.411152  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-009682
	
	I1201 20:09:01.411226  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.435481  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.435749  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.435776  363421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-009682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-009682/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-009682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:01.579541  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:01.579565  363421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:01.579613  363421 ubuntu.go:190] setting up certificates
	I1201 20:09:01.579630  363421 provision.go:84] configureAuth start
	I1201 20:09:01.579679  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:01.598330  363421 provision.go:143] copyHostCerts
	I1201 20:09:01.598405  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:01.598423  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:01.598511  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:01.598683  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:01.598697  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:01.598736  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:01.598833  363421 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:01.598844  363421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:01.598881  363421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:01.598980  363421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-009682 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-009682 localhost minikube]
	I1201 20:09:01.737971  363421 provision.go:177] copyRemoteCerts
	I1201 20:09:01.738050  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:01.738109  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.762885  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:01.874168  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:01.893977  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:09:01.912032  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:01.930036  363421 provision.go:87] duration metric: took 350.392221ms to configureAuth
	I1201 20:09:01.930066  363421 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:01.930245  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:01.930379  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:01.950447  363421 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:01.950661  363421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1201 20:09:01.950679  363421 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:02.295040  363421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:02.295063  363421 machine.go:97] duration metric: took 4.230905038s to provisionDockerMachine
	I1201 20:09:02.295074  363421 start.go:293] postStartSetup for "default-k8s-diff-port-009682" (driver="docker")
	I1201 20:09:02.295086  363421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:02.295140  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:02.295192  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.314605  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.417273  363421 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:02.420863  363421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:02.420886  363421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:02.420897  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:02.420943  363421 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:02.421012  363421 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:02.421096  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:02.429052  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:02.447160  363421 start.go:296] duration metric: took 152.072363ms for postStartSetup
	I1201 20:09:02.447237  363421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:02.447272  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.467442  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:08:59.406942  358766 out.go:252]   - Booting up control plane ...
	I1201 20:08:59.407069  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:08:59.407186  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:08:59.407725  358766 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:08:59.421400  358766 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:08:59.421548  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:08:59.429946  358766 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:08:59.430243  358766 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:08:59.430328  358766 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:08:59.525457  358766 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:08:59.525628  358766 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:09:00.027176  358766 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.895523ms
	I1201 20:09:00.029992  358766 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:09:00.030115  358766 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1201 20:09:00.030278  358766 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:09:00.030365  358766 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:09:01.034944  358766 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004762142s
	I1201 20:09:01.771813  358766 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.741647999s
	W1201 20:08:59.205004  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:01.709711  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:03.531458  358766 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501373313s
	I1201 20:09:03.549804  358766 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 20:09:03.560547  358766 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 20:09:03.570543  358766 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 20:09:03.570792  358766 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-456990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 20:09:03.579453  358766 kubeadm.go:319] [bootstrap-token] Using token: t6nth9.1dme03npps7xtqxg
	I1201 20:09:02.564699  363421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:02.569398  363421 fix.go:56] duration metric: took 4.843608039s for fixHost
	I1201 20:09:02.569438  363421 start.go:83] releasing machines lock for "default-k8s-diff-port-009682", held for 4.843675394s
	I1201 20:09:02.569512  363421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-009682
	I1201 20:09:02.588215  363421 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:02.588256  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.588344  363421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:02.588479  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:02.607456  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.607749  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:02.769630  363421 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:02.777217  363421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:02.819594  363421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:02.825242  363421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:02.825319  363421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:02.834483  363421 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:02.834510  363421 start.go:496] detecting cgroup driver to use...
	I1201 20:09:02.834562  363421 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:02.834631  363421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:02.850900  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:02.866607  363421 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:02.866666  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:02.885043  363421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:02.900602  363421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:03.001146  363421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:03.104903  363421 docker.go:234] disabling docker service ...
	I1201 20:09:03.104982  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:03.121947  363421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:03.139525  363421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:03.252507  363421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:03.356626  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:03.369483  363421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:03.383959  363421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:03.384018  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.392886  363421 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:03.392948  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.402431  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.411640  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.422189  363421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:03.432194  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.441678  363421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.450620  363421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:03.460183  363421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:03.467584  363421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:03.475047  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:03.567439  363421 ssh_runner.go:195] Run: sudo systemctl restart crio
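	A quick way to confirm the sed edits above landed before the restart, assuming the same drop-in path used in this run:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf

	The output should show the registry.k8s.io/pause:3.10.1 pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl set above.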
	I1201 20:09:03.699774  363421 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:03.699841  363421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:03.704895  363421 start.go:564] Will wait 60s for crictl version
	I1201 20:09:03.704954  363421 ssh_runner.go:195] Run: which crictl
	I1201 20:09:03.708839  363421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:03.734207  363421 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:03.734306  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.768401  363421 ssh_runner.go:195] Run: crio --version
	I1201 20:09:03.804334  363421 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
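	The crictl endpoint written a few lines earlier (/etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock) is what lets the version probe above run without flags; a minimal sketch of the same checks, assuming the default-k8s-diff-port-009682 node is reachable over minikube ssh:

		minikube -p default-k8s-diff-port-009682 ssh -- sudo crictl version
		minikube -p default-k8s-diff-port-009682 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

	The second command mirrors the kube-system container listing minikube itself issues when it later inspects cluster state.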
	I1201 20:09:03.580798  358766 out.go:252]   - Configuring RBAC rules ...
	I1201 20:09:03.580985  358766 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 20:09:03.585627  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 20:09:03.591157  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 20:09:03.594557  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 20:09:03.596997  358766 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 20:09:03.599538  358766 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 20:09:03.937260  358766 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 20:09:04.355604  358766 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 20:09:04.940044  358766 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 20:09:04.942081  358766 kubeadm.go:319] 
	I1201 20:09:04.942162  358766 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 20:09:04.942172  358766 kubeadm.go:319] 
	I1201 20:09:04.942247  358766 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 20:09:04.942273  358766 kubeadm.go:319] 
	I1201 20:09:04.942326  358766 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 20:09:04.942401  358766 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 20:09:04.942553  358766 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 20:09:04.942579  358766 kubeadm.go:319] 
	I1201 20:09:04.942671  358766 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 20:09:04.942684  358766 kubeadm.go:319] 
	I1201 20:09:04.942747  358766 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 20:09:04.942757  358766 kubeadm.go:319] 
	I1201 20:09:04.942813  358766 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 20:09:04.942933  358766 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 20:09:04.943117  358766 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 20:09:04.943129  358766 kubeadm.go:319] 
	I1201 20:09:04.943301  358766 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 20:09:04.943409  358766 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 20:09:04.943415  358766 kubeadm.go:319] 
	I1201 20:09:04.943527  358766 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.943664  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a \
	I1201 20:09:04.943691  358766 kubeadm.go:319] 	--control-plane 
	I1201 20:09:04.943696  358766 kubeadm.go:319] 
	I1201 20:09:04.943806  358766 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 20:09:04.943811  358766 kubeadm.go:319] 
	I1201 20:09:04.943935  358766 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t6nth9.1dme03npps7xtqxg \
	I1201 20:09:04.944090  358766 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:06dc444a5ab7086c8ab7763a4467d2ea9bcadb3b2da24d1940c0cca14b3cdc8a 
	I1201 20:09:04.950014  358766 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1201 20:09:04.950166  358766 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 20:09:04.950195  358766 cni.go:84] Creating CNI manager for ""
	I1201 20:09:04.950204  358766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:04.952428  358766 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1201 20:09:03.805467  363421 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-009682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:03.823590  363421 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:03.827746  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:03.838256  363421 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:03.838431  363421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:09:03.838501  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.872004  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.872030  363421 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:09:03.872101  363421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:03.903038  363421 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:03.903064  363421 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:03.903073  363421 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1201 20:09:03.903222  363421 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-009682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:03.903358  363421 ssh_runner.go:195] Run: crio config
	I1201 20:09:03.959717  363421 cni.go:84] Creating CNI manager for ""
	I1201 20:09:03.959751  363421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:03.959774  363421 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:09:03.959806  363421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-009682 NodeName:default-k8s-diff-port-009682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:03.959960  363421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-009682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
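The kubeadm config dump above is what minikube renders from the options struct logged a few lines earlier and then copies to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp line that follows). As a rough illustration only, and not minikube's actual generator, a similar config can be rendered from Go with text/template; the struct fields and template below are a hypothetical subset of the values visible in this log.

// Illustrative sketch: render a kubeadm config from a small value struct.
// This is NOT minikube's implementation; field names and the template are
// assumptions, with concrete values taken from the log above.
package main

import (
	"os"
	"text/template"
)

type clusterValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	v := clusterValues{
		AdvertiseAddress: "192.168.103.2",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-009682",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.34.2",
	}
	// Render to stdout; minikube instead writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new on the node, as the log shows.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}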
	
	I1201 20:09:03.960038  363421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:09:03.970035  363421 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:03.970088  363421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:03.981115  363421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1201 20:09:03.997387  363421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:09:04.013157  363421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1201 20:09:04.026334  363421 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:04.029983  363421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:04.040473  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.126425  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:04.157017  363421 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682 for IP: 192.168.103.2
	I1201 20:09:04.157048  363421 certs.go:195] generating shared ca certs ...
	I1201 20:09:04.157075  363421 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.157268  363421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:04.157363  363421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:04.157388  363421 certs.go:257] generating profile certs ...
	I1201 20:09:04.157486  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/client.key
	I1201 20:09:04.157547  363421 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key.6e926564
	I1201 20:09:04.157582  363421 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key
	I1201 20:09:04.157719  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:04.157763  363421 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:04.157774  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:04.157807  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:04.157844  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:04.157878  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:04.157927  363421 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:04.158666  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:04.181431  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:04.214841  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:04.239463  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:04.265930  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:09:04.285322  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:04.302994  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:04.322040  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/default-k8s-diff-port-009682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:04.343997  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:04.366089  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:04.385828  363421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:04.403981  363421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:04.416528  363421 ssh_runner.go:195] Run: openssl version
	I1201 20:09:04.423168  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:04.431851  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435576  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.435634  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:04.472014  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:04.480631  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:04.489567  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493837  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.493903  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:04.529237  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:04.538935  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:04.547861  363421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551700  363421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.551759  363421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:04.587866  363421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
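The three openssl/ln pairs above install each CA certificate into OpenSSL's hash-named lookup directory: the subject hash printed by openssl x509 -hash becomes the /etc/ssl/certs/<hash>.0 symlink name. A minimal sketch of the same sequence driven from Go is shown below; it assumes a local openssl binary and sudo, whereas minikube runs the equivalent commands on the node over SSH via ssh_runner.

// Minimal sketch of the hash-and-symlink step shown above (an illustration,
// not minikube's code). Paths are the ones from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func trustCert(pem string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs <pem> /etc/ssl/certs/<hash>.0, matching the commands in the log.
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/16873.pem",
		"/usr/share/ca-certificates/168732.pem",
	} {
		if err := trustCert(pem); err != nil {
			fmt.Println(err)
		}
	}
}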
	I1201 20:09:04.597205  363421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:04.600927  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:04.636786  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:04.673583  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:04.727932  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:04.773666  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:04.824841  363421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:04.870082  363421 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-009682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-009682 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:04.870188  363421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:04.870248  363421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:04.900068  363421 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:04.900091  363421 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:04.900105  363421 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:04.900111  363421 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:04.900115  363421 cri.go:89] found id: ""
	I1201 20:09:04.900169  363421 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:04.915170  363421 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:04Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:04.915380  363421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:04.924568  363421 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:04.924589  363421 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:04.924636  363421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:04.933995  363421 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:04.935868  363421 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-009682" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.936660  363421 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-009682" cluster setting kubeconfig missing "default-k8s-diff-port-009682" context setting]
	I1201 20:09:04.937981  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.940402  363421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:04.953428  363421 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1201 20:09:04.953484  363421 kubeadm.go:602] duration metric: took 28.88936ms to restartPrimaryControlPlane
	I1201 20:09:04.953496  363421 kubeadm.go:403] duration metric: took 83.422203ms to StartCluster
	I1201 20:09:04.953514  363421 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.953648  363421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:04.956713  363421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:04.957022  363421 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:04.957280  363421 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:04.957337  363421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:04.957414  363421 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.957431  363421 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.957439  363421 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:04.957463  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.957965  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958147  363421 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958169  363421 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.958178  363421 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:04.958205  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.958327  363421 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-009682"
	I1201 20:09:04.958364  363421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-009682"
	I1201 20:09:04.958736  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.958772  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:04.962452  363421 out.go:179] * Verifying Kubernetes components...
	W1201 20:09:03.005710  352497 pod_ready.go:104] pod "coredns-7d764666f9-6kzhv" is not "Ready", error: <nil>
	I1201 20:09:03.508253  352497 pod_ready.go:94] pod "coredns-7d764666f9-6kzhv" is "Ready"
	I1201 20:09:03.508310  352497 pod_ready.go:86] duration metric: took 31.508797646s for pod "coredns-7d764666f9-6kzhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.512451  352497 pod_ready.go:83] waiting for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.517492  352497 pod_ready.go:94] pod "etcd-no-preload-240359" is "Ready"
	I1201 20:09:03.517514  352497 pod_ready.go:86] duration metric: took 5.040457ms for pod "etcd-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.519719  352497 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.523698  352497 pod_ready.go:94] pod "kube-apiserver-no-preload-240359" is "Ready"
	I1201 20:09:03.523718  352497 pod_ready.go:86] duration metric: took 3.972027ms for pod "kube-apiserver-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.525515  352497 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.703526  352497 pod_ready.go:94] pod "kube-controller-manager-no-preload-240359" is "Ready"
	I1201 20:09:03.703559  352497 pod_ready.go:86] duration metric: took 178.021828ms for pod "kube-controller-manager-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:03.903891  352497 pod_ready.go:83] waiting for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.304197  352497 pod_ready.go:94] pod "kube-proxy-zbbsb" is "Ready"
	I1201 20:09:04.304226  352497 pod_ready.go:86] duration metric: took 400.309563ms for pod "kube-proxy-zbbsb" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.503580  352497 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906218  352497 pod_ready.go:94] pod "kube-scheduler-no-preload-240359" is "Ready"
	I1201 20:09:04.906257  352497 pod_ready.go:86] duration metric: took 402.653219ms for pod "kube-scheduler-no-preload-240359" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:04.906272  352497 pod_ready.go:40] duration metric: took 32.911773572s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:04.968561  352497 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:04.971433  352497 out.go:179] * Done! kubectl is now configured to use "no-preload-240359" cluster and "default" namespace by default
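The pod_ready loop above waits on the Ready condition of each kube-system control-plane pod before declaring the no-preload cluster done. A hedged sketch of the same wait using kubectl from Go follows; it assumes kubectl is already pointed at the "no-preload-240359" context (as the final log line reports) and uses a plain jsonpath query rather than minikube's pod_ready.go helpers.

// Illustrative readiness wait, not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, name string) (bool, error) {
	// Read the pod's Ready condition status ("True" once the pod is Ready).
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	pods := []string{
		"etcd-no-preload-240359",
		"kube-apiserver-no-preload-240359",
		"kube-scheduler-no-preload-240359",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, p := range pods {
		for {
			ready, err := podReady("kube-system", p)
			if err == nil && ready {
				fmt.Println(p, "is Ready")
				break
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", p)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}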
	I1201 20:09:04.964059  363421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:04.995900  363421 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:04.997174  363421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:04.997209  363421 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:04.998860  363421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:04.998888  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:04.998905  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:04.998920  363421 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:04.998954  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.998983  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:04.999136  363421 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-009682"
	W1201 20:09:04.999150  363421 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:04.999178  363421 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:04.999898  363421 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:05.045128  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.046144  363421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.046164  363421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:05.046223  363421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:05.057331  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.078989  363421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:05.178412  363421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:05.205272  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:05.209155  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:05.209177  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:05.212411  363421 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:05.224200  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:05.235326  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:05.235354  363421 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:05.271440  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:05.271468  363421 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:05.298205  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:05.298230  363421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:05.323776  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:05.323811  363421 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:05.348888  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:05.348939  363421 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:05.368484  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:05.368507  363421 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:05.396227  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:05.396254  363421 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:05.428995  363421 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:05.429022  363421 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:05.460762  363421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:06.771032  363421 node_ready.go:49] node "default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:06.771062  363421 node_ready.go:38] duration metric: took 1.558615333s for node "default-k8s-diff-port-009682" to be "Ready" ...
	I1201 20:09:06.771088  363421 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:06.771140  363421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:07.343358  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.137932602s)
	I1201 20:09:07.343426  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.119182243s)
	I1201 20:09:07.343531  363421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.8827376s)
	I1201 20:09:07.343581  363421 api_server.go:72] duration metric: took 2.386523736s to wait for apiserver process to appear ...
	I1201 20:09:07.343592  363421 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:07.343666  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:07.344907  363421 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-009682 addons enable metrics-server
	
	I1201 20:09:07.349323  363421 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:07.349428  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:07.349453  363421 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:07.350469  363421 addons.go:530] duration metric: took 2.393112876s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
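The 500 responses above are expected shortly after the apiserver (re)starts: the rbac/bootstrap-roles and scheduling post-start hooks have not finished yet, so /healthz reports "healthz check failed" and minikube keeps polling until it gets a 200. A minimal poll loop in that spirit is sketched below; it is an illustration rather than minikube's api_server.go, it reuses the endpoint from the log, and it skips TLS verification because the apiserver certificate is signed by the cluster-local minikubeCA.

// Illustrative healthz poll loop (assumption: endpoint and timings chosen for
// the sketch, not taken from minikube's code).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.103.2:8444/healthz" // endpoint from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// A 500 listing "[-]poststarthook/... failed" is normal right after restart.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}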
	I1201 20:09:04.953734  358766 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1201 20:09:04.963206  358766 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1201 20:09:04.963275  358766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1201 20:09:04.985147  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1201 20:09:05.377592  358766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:09:05.377721  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.377810  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-456990 minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=newest-cni-456990 minikube.k8s.io/primary=true
	I1201 20:09:05.396988  358766 ops.go:34] apiserver oom_adj: -16
	I1201 20:09:05.488649  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:05.988848  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.488812  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:06.989508  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:07.489212  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1201 20:09:04.208582  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:06.704348  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:07.988921  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.488969  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:08.989421  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.489572  358766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 20:09:09.556897  358766 kubeadm.go:1114] duration metric: took 4.179215442s to wait for elevateKubeSystemPrivileges
	I1201 20:09:09.556925  358766 kubeadm.go:403] duration metric: took 12.057888116s to StartCluster
	I1201 20:09:09.556942  358766 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.557018  358766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:09.561139  358766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:09.561442  358766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:09.561528  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 20:09:09.561526  358766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:09.561616  358766 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:09.561635  358766 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	I1201 20:09:09.561663  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.561697  358766 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:09.561707  358766 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:09.561716  358766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:09.562001  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.562203  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.563633  358766 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:09.565146  358766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:09.585957  358766 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	I1201 20:09:09.585993  358766 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:09.586354  358766 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:09.589424  358766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:09.590905  358766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.590926  358766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:09.590986  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.620117  358766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.620141  358766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:09.620204  358766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:09.622564  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.643761  358766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:09.651851  358766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 20:09:09.707356  358766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:09.735698  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:09.774797  358766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:09.833543  358766 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1201 20:09:09.835322  358766 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:09.835378  358766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:10.044020  358766 api_server.go:72] duration metric: took 482.54493ms to wait for apiserver process to appear ...
	I1201 20:09:10.044048  358766 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:10.044066  358766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:10.048749  358766 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:10.049558  358766 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:10.049578  358766 api_server.go:131] duration metric: took 5.523573ms to wait for apiserver health ...
	I1201 20:09:10.049586  358766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:10.050178  358766 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1201 20:09:10.051821  358766 addons.go:530] duration metric: took 490.303553ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1201 20:09:10.052035  358766 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:10.052063  358766 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052076  358766 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:10.052094  358766 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1201 20:09:10.052103  358766 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running
	I1201 20:09:10.052117  358766 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:10.052128  358766 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:09:10.052138  358766 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:10.052148  358766 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:10.052158  358766 system_pods.go:74] duration metric: took 2.56626ms to wait for pod list to return data ...
	I1201 20:09:10.052170  358766 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:10.054122  358766 default_sa.go:45] found service account: "default"
	I1201 20:09:10.054138  358766 default_sa.go:55] duration metric: took 1.961704ms for default service account to be created ...
	I1201 20:09:10.054150  358766 kubeadm.go:587] duration metric: took 492.678996ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:10.054169  358766 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:10.056013  358766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:10.056034  358766 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:10.056055  358766 node_conditions.go:105] duration metric: took 1.88044ms to run NodePressure ...
	I1201 20:09:10.056067  358766 start.go:242] waiting for startup goroutines ...
	I1201 20:09:10.338257  358766 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-456990" context rescaled to 1 replicas
	I1201 20:09:10.338330  358766 start.go:247] waiting for cluster config update ...
	I1201 20:09:10.338346  358766 start.go:256] writing updated cluster config ...
	I1201 20:09:10.338608  358766 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:10.395956  358766 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:10.398166  358766 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
	I1201 20:09:07.844328  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:07.848970  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:07.849000  363421 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:08.344445  363421 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1201 20:09:08.349233  363421 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1201 20:09:08.350078  363421 api_server.go:141] control plane version: v1.34.2
	I1201 20:09:08.350100  363421 api_server.go:131] duration metric: took 1.006452276s to wait for apiserver health ...
	I1201 20:09:08.350114  363421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:08.353556  363421 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:08.353633  363421 system_pods.go:61] "coredns-66bc5c9577-hf646" [959685f2-3196-405c-b2f8-bb177bd28bcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:09:08.353649  363421 system_pods.go:61] "etcd-default-k8s-diff-port-009682" [1290bc7e-2b19-417b-b878-8b8866ebd5ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:08.353658  363421 system_pods.go:61] "kindnet-pqt6x" [358ffbfc-91b7-4ce9-a3ed-987d5af5abcf] Running
	I1201 20:09:08.353673  363421 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-009682" [8a086238-bc1f-4e44-8953-a0dbb4d3081c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:08.353687  363421 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-009682" [ea3a59e8-9da7-4c8c-934a-2f80e1445f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:08.353694  363421 system_pods.go:61] "kube-proxy-fjn7h" [f4fdbbdd-f85d-420b-b618-6edfd4259349] Running
	I1201 20:09:08.353708  363421 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-009682" [428d94a5-7a6e-464a-9d09-2b39687d913a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:08.353713  363421 system_pods.go:61] "storage-provisioner" [329b9699-cf53-4f5f-b7c3-52f77070a59f] Running
	I1201 20:09:08.353720  363421 system_pods.go:74] duration metric: took 3.593864ms to wait for pod list to return data ...
	I1201 20:09:08.353728  363421 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:08.356474  363421 default_sa.go:45] found service account: "default"
	I1201 20:09:08.356492  363421 default_sa.go:55] duration metric: took 2.760154ms for default service account to be created ...
	I1201 20:09:08.356500  363421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:09:08.358911  363421 system_pods.go:86] 8 kube-system pods found
	I1201 20:09:08.358946  363421 system_pods.go:89] "coredns-66bc5c9577-hf646" [959685f2-3196-405c-b2f8-bb177bd28bcf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:09:08.358959  363421 system_pods.go:89] "etcd-default-k8s-diff-port-009682" [1290bc7e-2b19-417b-b878-8b8866ebd5ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:08.358965  363421 system_pods.go:89] "kindnet-pqt6x" [358ffbfc-91b7-4ce9-a3ed-987d5af5abcf] Running
	I1201 20:09:08.358974  363421 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-009682" [8a086238-bc1f-4e44-8953-a0dbb4d3081c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:08.358985  363421 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-009682" [ea3a59e8-9da7-4c8c-934a-2f80e1445f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:08.358992  363421 system_pods.go:89] "kube-proxy-fjn7h" [f4fdbbdd-f85d-420b-b618-6edfd4259349] Running
	I1201 20:09:08.359000  363421 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-009682" [428d94a5-7a6e-464a-9d09-2b39687d913a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:08.359006  363421 system_pods.go:89] "storage-provisioner" [329b9699-cf53-4f5f-b7c3-52f77070a59f] Running
	I1201 20:09:08.359014  363421 system_pods.go:126] duration metric: took 2.508618ms to wait for k8s-apps to be running ...
	I1201 20:09:08.359022  363421 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:09:08.359070  363421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:08.372350  363421 system_svc.go:56] duration metric: took 13.321686ms WaitForService to wait for kubelet
	I1201 20:09:08.372373  363421 kubeadm.go:587] duration metric: took 3.41531784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:09:08.372389  363421 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:08.374954  363421 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:08.374984  363421 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:08.375009  363421 node_conditions.go:105] duration metric: took 2.614763ms to run NodePressure ...
	I1201 20:09:08.375026  363421 start.go:242] waiting for startup goroutines ...
	I1201 20:09:08.375057  363421 start.go:247] waiting for cluster config update ...
	I1201 20:09:08.375067  363421 start.go:256] writing updated cluster config ...
	I1201 20:09:08.375354  363421 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:08.378839  363421 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:08.382028  363421 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hf646" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:09:10.389146  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:09.204240  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	W1201 20:09:11.206421  354303 pod_ready.go:104] pod "coredns-66bc5c9577-qngk9" is not "Ready", error: <nil>
	I1201 20:09:13.704548  354303 pod_ready.go:94] pod "coredns-66bc5c9577-qngk9" is "Ready"
	I1201 20:09:13.704575  354303 pod_ready.go:86] duration metric: took 33.505908319s for pod "coredns-66bc5c9577-qngk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.707425  354303 pod_ready.go:83] waiting for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.711749  354303 pod_ready.go:94] pod "etcd-embed-certs-990820" is "Ready"
	I1201 20:09:13.711773  354303 pod_ready.go:86] duration metric: took 4.323983ms for pod "etcd-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.713928  354303 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.717307  354303 pod_ready.go:94] pod "kube-apiserver-embed-certs-990820" is "Ready"
	I1201 20:09:13.717325  354303 pod_ready.go:86] duration metric: took 3.374812ms for pod "kube-apiserver-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.719591  354303 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:13.902994  354303 pod_ready.go:94] pod "kube-controller-manager-embed-certs-990820" is "Ready"
	I1201 20:09:13.903023  354303 pod_ready.go:86] duration metric: took 183.37842ms for pod "kube-controller-manager-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.102962  354303 pod_ready.go:83] waiting for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.503430  354303 pod_ready.go:94] pod "kube-proxy-t2nmz" is "Ready"
	I1201 20:09:14.503456  354303 pod_ready.go:86] duration metric: took 400.471194ms for pod "kube-proxy-t2nmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:14.702981  354303 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:15.102882  354303 pod_ready.go:94] pod "kube-scheduler-embed-certs-990820" is "Ready"
	I1201 20:09:15.102914  354303 pod_ready.go:86] duration metric: took 399.904472ms for pod "kube-scheduler-embed-certs-990820" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:15.102929  354303 pod_ready.go:40] duration metric: took 34.974775887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:15.148041  354303 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:09:15.149776  354303 out.go:179] * Done! kubectl is now configured to use "embed-certs-990820" cluster and "default" namespace by default
	W1201 20:09:12.888555  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:15.388530  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:17.388819  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
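Note: the api_server.go lines above show the start-up wait loop polling the apiserver's /healthz endpoint (https://192.168.103.2:8444/healthz) until the 500 response ("[-]poststarthook/rbac/bootstrap-roles failed") turns into a 200 "ok", and the pod_ready.go lines show the follow-up wait for each kube-system pod to report Ready. The snippet below is a minimal sketch of that polling pattern, not minikube's actual code; the hard-coded URL (copied from the log) and the use of InsecureSkipVerify instead of the cluster CA are assumptions made purely for illustration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumed endpoint, taken from the log above; a real client would trust the cluster CA
        // rather than skipping TLS verification.
        const url = "https://192.168.103.2:8444/healthz"
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s returned %d\n%s\n", url, resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // healthy: the wait in the log stops once it sees "returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows retries at roughly half-second intervals
        }
    }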
	
	
	==> CRI-O <==
	Dec 01 20:08:43 no-preload-240359 crio[569]: time="2025-12-01T20:08:43.209735265Z" level=info msg="Started container" PID=1731 containerID=2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper id=375b63f4-df96-4933-af04-5d40a651f69f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8017bf33122582d1e59f276fc60b3d8d3c26a9dac8e48b29a3fe7329713e84b2
	Dec 01 20:08:44 no-preload-240359 crio[569]: time="2025-12-01T20:08:44.169233746Z" level=info msg="Removing container: d2b4c96946ed8e70164e7bb47617ef1647422cb6c39a123be5e8cdab046738ba" id=befa0468-d474-4215-ade5-6d22ce42c3ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:44 no-preload-240359 crio[569]: time="2025-12-01T20:08:44.181957192Z" level=info msg="Removed container d2b4c96946ed8e70164e7bb47617ef1647422cb6c39a123be5e8cdab046738ba: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=befa0468-d474-4215-ade5-6d22ce42c3ec name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.094365655Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=86fe568e-ef9b-41cd-af44-0674c9aa5ff0 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.097016153Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f42c7b61-7de7-48dd-80b8-0e959759494a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.100262046Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=441139c5-b807-490f-9f55-900846424451 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.10042383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.108077325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.108788707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.134183466Z" level=info msg="Created container cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=441139c5-b807-490f-9f55-900846424451 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.134823428Z" level=info msg="Starting container: cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603" id=bf996bda-7a24-4b15-b8b6-6be90eb4c1b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.136897409Z" level=info msg="Started container" PID=1743 containerID=cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper id=bf996bda-7a24-4b15-b8b6-6be90eb4c1b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8017bf33122582d1e59f276fc60b3d8d3c26a9dac8e48b29a3fe7329713e84b2
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.202084124Z" level=info msg="Removing container: 2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b" id=20b8083a-7af6-4c57-a092-7d9c542dc8ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:08:55 no-preload-240359 crio[569]: time="2025-12-01T20:08:55.212673811Z" level=info msg="Removed container 2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6/dashboard-metrics-scraper" id=20b8083a-7af6-4c57-a092-7d9c542dc8ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.222273612Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=88dff7a0-6dc9-4b9b-ab59-75107d543af4 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.223316939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9917d42d-fcf7-41e6-8c1b-0fd17d4f1345 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.224476876Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3e9bfc97-822d-4df4-9917-65f9ad1ee75f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.22472703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229129325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229320758Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b8447004c46863bfa1d5ad58a729531f777916fd4fea8e3d868322e1d903e677/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229352706Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b8447004c46863bfa1d5ad58a729531f777916fd4fea8e3d868322e1d903e677/merged/etc/group: no such file or directory"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.229558089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.260231519Z" level=info msg="Created container a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457: kube-system/storage-provisioner/storage-provisioner" id=3e9bfc97-822d-4df4-9917-65f9ad1ee75f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.261021546Z" level=info msg="Starting container: a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457" id=4dbc5cb9-f143-4a1e-8737-82dac7614d17 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:02 no-preload-240359 crio[569]: time="2025-12-01T20:09:02.262987092Z" level=info msg="Started container" PID=1757 containerID=a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457 description=kube-system/storage-provisioner/storage-provisioner id=4dbc5cb9-f143-4a1e-8737-82dac7614d17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9eab2fd2bff38473f0da79ece9306de53007b2431ebe527cc472142687e387d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a765c7d4cc7dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   f9eab2fd2bff3       storage-provisioner                          kube-system
	cb84798749e15       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   8017bf3312258       dashboard-metrics-scraper-867fb5f87b-fgll6   kubernetes-dashboard
	ba8a74fae657c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   309ebc476bc92       kubernetes-dashboard-b84665fb8-f7grf         kubernetes-dashboard
	a45016736542b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b75982ce3d8bd       busybox                                      default
	510968b59805a       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   7ef1387c72e02       coredns-7d764666f9-6kzhv                     kube-system
	2f185801b7d0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   f9eab2fd2bff3       storage-provisioner                          kube-system
	844ba0fcae08d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   32cf85dfd63f4       kindnet-s7r55                                kube-system
	de746e8ab3a57       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           50 seconds ago      Running             kube-proxy                  0                   1241a9545ea16       kube-proxy-zbbsb                             kube-system
	6b752f5fa5d25       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           53 seconds ago      Running             kube-controller-manager     0                   f4094ae54b5e3       kube-controller-manager-no-preload-240359    kube-system
	e49b2d4ba56ef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   aa45fbe1b1335       etcd-no-preload-240359                       kube-system
	29cdf91985783       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           53 seconds ago      Running             kube-scheduler              0                   69aad79bb66af       kube-scheduler-no-preload-240359             kube-system
	36005a70764f4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           53 seconds ago      Running             kube-apiserver              0                   05a86b5e32e16       kube-apiserver-no-preload-240359             kube-system
	
	
	==> coredns [510968b59805a625501e44f964dc5dbaaeca09bb0e1fad75aead446e677e99e2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51640 - 6539 "HINFO IN 2956688600488665119.1571566898641387574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02349224s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-240359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-240359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=no-preload-240359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-240359
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:11 +0000   Mon, 01 Dec 2025 20:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-240359
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                061d53b4-7f5d-40c9-8604-f01915628ca1
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-6kzhv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-240359                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-s7r55                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-240359              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-240359     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-zbbsb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-240359              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-fgll6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-f7grf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-240359 event: Registered Node no-preload-240359 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node no-preload-240359 event: Registered Node no-preload-240359 in Controller
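Note on the "Allocated resources" table above: each figure is the sum of the per-pod requests and limits listed under "Non-terminated Pods", with the percentage computed against the node's allocatable capacity and rounded down. CPU requests: 100m + 100m + 100m + 250m + 200m + 100m = 850m, i.e. 850/8000 ≈ 10.6% of the 8-CPU node, shown as 10%; memory requests: 70Mi + 100Mi + 50Mi = 220Mi, well under 1% of the 32863352Ki allocatable, shown as 0%.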
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [e49b2d4ba56ef1c2e40ddb43da58758bdbf5d919d3c69e15fb12ddd94e3859e6] <==
	{"level":"warn","ts":"2025-12-01T20:08:29.694686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.702019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.709509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.716944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.723741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.735210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.741363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.748566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.763495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.770799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.778102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.785680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.793407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.800447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.807834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.814437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.821045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.827884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.835212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.854419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.861250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.868221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.876619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:29.931814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:56.938248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.019685ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597438231491771 > lease_revoke:<id:06ed9adb87e93420>","response":"size:28"}
	
	
	==> kernel <==
	 20:09:21 up  1:51,  0 user,  load average: 4.01, 3.42, 2.41
	Linux no-preload-240359 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [844ba0fcae08d61a06cb533c6dd7bc40ecb98db5d968faabcf4760594e3545c0] <==
	I1201 20:08:31.691636       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:31.691963       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1201 20:08:31.692204       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:31.692230       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:31.692259       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:31.893472       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:31.893545       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:31.893560       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:31.893757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:32.194650       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:32.194682       1 metrics.go:72] Registering metrics
	I1201 20:08:32.194754       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:41.893448       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:08:41.893527       1 main.go:301] handling current node
	I1201 20:08:51.893587       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:08:51.893625       1 main.go:301] handling current node
	I1201 20:09:01.893380       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:09:01.893411       1 main.go:301] handling current node
	I1201 20:09:11.893685       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:09:11.893732       1 main.go:301] handling current node
	I1201 20:09:21.902479       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1201 20:09:21.902511       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36005a70764f454efe8261a6e2c055592d11b2995f54692acfa06be75c01e231] <==
	I1201 20:08:30.433562       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.433591       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.434375       1 aggregator.go:187] initial CRD sync complete...
	I1201 20:08:30.434388       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:08:30.434395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:08:30.434403       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:08:30.434634       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.440045       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:08:30.440435       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 20:08:30.440479       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1201 20:08:30.444806       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:08:30.449876       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:08:30.464489       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:08:30.464627       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:30.728449       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:08:30.760483       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:08:30.781929       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:30.790021       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:30.798415       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:08:30.833802       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.204.247"}
	I1201 20:08:30.845993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.169.63"}
	I1201 20:08:31.337557       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:08:34.033919       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:08:34.136892       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:08:34.234714       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b752f5fa5d255e1175b4bd1269edc34ac8b33b4ccd5fd8ef5ee42c1138e4140] <==
	I1201 20:08:33.595986       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596010       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596082       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.596920       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:33.597131       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.601778       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602036       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602085       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602179       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.601893       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602252       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602273       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602527       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602305       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602698       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.602736       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.604195       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1201 20:08:33.604321       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-240359"
	I1201 20:08:33.604399       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1201 20:08:33.607802       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.607963       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.697685       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.702853       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:33.702877       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:08:33.702883       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [de746e8ab3a57e792862aca89bb9e8210ee00df2dcb4ec56548296e6b1618ac7] <==
	I1201 20:08:31.503054       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:08:31.570576       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:31.671374       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:31.671405       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1201 20:08:31.671520       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:08:31.693042       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:31.693108       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:08:31.698498       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:08:31.698986       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:08:31.699053       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:31.701330       1 config.go:200] "Starting service config controller"
	I1201 20:08:31.701364       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:08:31.701404       1 config.go:309] "Starting node config controller"
	I1201 20:08:31.701419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:08:31.701425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:08:31.701521       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:08:31.701529       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:08:31.701546       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:08:31.701550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:08:31.801556       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:08:31.801616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:08:31.801625       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [29cdf919857836c121bb0ca4a31dd8000e82c51bc59f779d45be989f90169f51] <==
	I1201 20:08:28.976817       1 serving.go:386] Generated self-signed cert in-memory
	I1201 20:08:30.402549       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1201 20:08:30.402591       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:30.409904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:30.409930       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410049       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1201 20:08:30.410132       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410103       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1201 20:08:30.410183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:08:30.410460       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:08:30.410757       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:08:30.510725       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.510868       1 shared_informer.go:377] "Caches are synced"
	I1201 20:08:30.511047       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.167858     722 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-240359" containerName="kube-scheduler"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.167990     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: I1201 20:08:44.168018     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:44 no-preload-240359 kubelet[722]: E1201 20:08:44.168220     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: E1201 20:08:45.172511     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: I1201 20:08:45.172547     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:45 no-preload-240359 kubelet[722]: E1201 20:08:45.172768     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: E1201 20:08:46.175474     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: I1201 20:08:46.175512     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:46 no-preload-240359 kubelet[722]: E1201 20:08:46.175711     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.093737     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.093779     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.200603     722 scope.go:122] "RemoveContainer" containerID="2a14d9f0ef0d07e7435cf4796db0c8217149c20f1c8e4085ddc58674998fff0b"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.200904     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: I1201 20:08:55.200942     722 scope.go:122] "RemoveContainer" containerID="cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	Dec 01 20:08:55 no-preload-240359 kubelet[722]: E1201 20:08:55.201154     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: E1201 20:08:56.205966     722 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" containerName="dashboard-metrics-scraper"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: I1201 20:08:56.206021     722 scope.go:122] "RemoveContainer" containerID="cb84798749e15f57ae153e3cbefab2949af6f5e69dad49b7330d6ba1af401603"
	Dec 01 20:08:56 no-preload-240359 kubelet[722]: E1201 20:08:56.206243     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-fgll6_kubernetes-dashboard(145da350-1d51-42ff-9118-f36bcf5024a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-fgll6" podUID="145da350-1d51-42ff-9118-f36bcf5024a2"
	Dec 01 20:09:02 no-preload-240359 kubelet[722]: I1201 20:09:02.221811     722 scope.go:122] "RemoveContainer" containerID="2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf"
	Dec 01 20:09:03 no-preload-240359 kubelet[722]: E1201 20:09:03.126752     722 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6kzhv" containerName="coredns"
	Dec 01 20:09:17 no-preload-240359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:17 no-preload-240359 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:17 no-preload-240359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:17 no-preload-240359 systemd[1]: kubelet.service: Consumed 1.708s CPU time.
	
	
	==> kubernetes-dashboard [ba8a74fae657c1cb17397fdb3a557728f7746032c8530cacd94377d02328e38e] <==
	2025/12/01 20:08:39 Starting overwatch
	2025/12/01 20:08:39 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:39 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:39 Using secret token for csrf signing
	2025/12/01 20:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:39 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/01 20:08:39 Generating JWE encryption key
	2025/12/01 20:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:40 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:40 Creating in-cluster Sidecar client
	2025/12/01 20:08:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:40 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f185801b7d0fadbf2e0686871d2c9ac6150a3fae2b8fb8f9807e45e9254f1bf] <==
	I1201 20:08:31.468668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:01.471825       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a765c7d4cc7dc1f4724ff9ae9c28c386601a6256b8f12a3058791b6a4f566457] <==
	I1201 20:09:02.276685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:02.284953       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:02.285008       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:02.287475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:05.742870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:10.003743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:13.602157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:16.656515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.679002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.683211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:19.683387       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:19.683530       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521!
	I1201 20:09:19.683531       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8fcc8b66-6889-4a42-8e02-82e3bfaf2063", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521 became leader
	W1201 20:09:19.685497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:19.689368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:19.783771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-240359_8d92b54d-9ac2-4f5f-970d-ad05b7892521!
	W1201 20:09:21.693018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:21.697416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-240359 -n no-preload-240359
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-240359 -n no-preload-240359: exit status 2 (368.11968ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-240359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-990820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-990820 --alsologtostderr -v=1: exit status 80 (2.548152922s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-990820 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:09:26.907981  371092 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:26.908402  371092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:26.908413  371092 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:26.908417  371092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:26.908619  371092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:26.908853  371092 out.go:368] Setting JSON to false
	I1201 20:09:26.908870  371092 mustload.go:66] Loading cluster: embed-certs-990820
	I1201 20:09:26.909208  371092 config.go:182] Loaded profile config "embed-certs-990820": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:26.909614  371092 cli_runner.go:164] Run: docker container inspect embed-certs-990820 --format={{.State.Status}}
	I1201 20:09:26.927597  371092 host.go:66] Checking if "embed-certs-990820" exists ...
	I1201 20:09:26.927912  371092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:26.988506  371092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-01 20:09:26.97762064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:26.989096  371092 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-990820 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:09:26.990545  371092 out.go:179] * Pausing node embed-certs-990820 ... 
	I1201 20:09:26.992409  371092 host.go:66] Checking if "embed-certs-990820" exists ...
	I1201 20:09:26.992733  371092 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:26.992780  371092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-990820
	I1201 20:09:27.010494  371092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/embed-certs-990820/id_rsa Username:docker}
	I1201 20:09:27.108116  371092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:27.123060  371092 pause.go:52] kubelet running: true
	I1201 20:09:27.123153  371092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:27.288888  371092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:27.288987  371092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:27.357740  371092 cri.go:89] found id: "0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5"
	I1201 20:09:27.357764  371092 cri.go:89] found id: "4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3"
	I1201 20:09:27.357770  371092 cri.go:89] found id: "f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731"
	I1201 20:09:27.357775  371092 cri.go:89] found id: "4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7"
	I1201 20:09:27.357781  371092 cri.go:89] found id: "024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	I1201 20:09:27.357786  371092 cri.go:89] found id: "584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f"
	I1201 20:09:27.357791  371092 cri.go:89] found id: "25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b"
	I1201 20:09:27.357795  371092 cri.go:89] found id: "436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af"
	I1201 20:09:27.357799  371092 cri.go:89] found id: "43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5"
	I1201 20:09:27.357807  371092 cri.go:89] found id: "b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	I1201 20:09:27.357812  371092 cri.go:89] found id: "95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0"
	I1201 20:09:27.357817  371092 cri.go:89] found id: ""
	I1201 20:09:27.357861  371092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:27.370404  371092 retry.go:31] will retry after 334.161377ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:27Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:27.704908  371092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:27.717263  371092 pause.go:52] kubelet running: false
	I1201 20:09:27.717330  371092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:27.873662  371092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:27.873746  371092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:27.941921  371092 cri.go:89] found id: "0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5"
	I1201 20:09:27.941938  371092 cri.go:89] found id: "4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3"
	I1201 20:09:27.941943  371092 cri.go:89] found id: "f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731"
	I1201 20:09:27.941946  371092 cri.go:89] found id: "4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7"
	I1201 20:09:27.941949  371092 cri.go:89] found id: "024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	I1201 20:09:27.941953  371092 cri.go:89] found id: "584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f"
	I1201 20:09:27.941955  371092 cri.go:89] found id: "25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b"
	I1201 20:09:27.941958  371092 cri.go:89] found id: "436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af"
	I1201 20:09:27.941961  371092 cri.go:89] found id: "43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5"
	I1201 20:09:27.941973  371092 cri.go:89] found id: "b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	I1201 20:09:27.941976  371092 cri.go:89] found id: "95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0"
	I1201 20:09:27.941978  371092 cri.go:89] found id: ""
	I1201 20:09:27.942019  371092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:27.954034  371092 retry.go:31] will retry after 294.831961ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:27Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:28.249453  371092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:28.262756  371092 pause.go:52] kubelet running: false
	I1201 20:09:28.262814  371092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:28.422878  371092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:28.422949  371092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:28.491648  371092 cri.go:89] found id: "0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5"
	I1201 20:09:28.491674  371092 cri.go:89] found id: "4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3"
	I1201 20:09:28.491680  371092 cri.go:89] found id: "f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731"
	I1201 20:09:28.491685  371092 cri.go:89] found id: "4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7"
	I1201 20:09:28.491690  371092 cri.go:89] found id: "024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	I1201 20:09:28.491702  371092 cri.go:89] found id: "584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f"
	I1201 20:09:28.491706  371092 cri.go:89] found id: "25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b"
	I1201 20:09:28.491710  371092 cri.go:89] found id: "436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af"
	I1201 20:09:28.491715  371092 cri.go:89] found id: "43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5"
	I1201 20:09:28.491722  371092 cri.go:89] found id: "b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	I1201 20:09:28.491727  371092 cri.go:89] found id: "95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0"
	I1201 20:09:28.491731  371092 cri.go:89] found id: ""
	I1201 20:09:28.491782  371092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:28.505060  371092 retry.go:31] will retry after 543.397331ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:28Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.049439  371092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:29.066174  371092 pause.go:52] kubelet running: false
	I1201 20:09:29.066232  371092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:29.267223  371092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:29.267469  371092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:29.366160  371092 cri.go:89] found id: "0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5"
	I1201 20:09:29.366187  371092 cri.go:89] found id: "4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3"
	I1201 20:09:29.366193  371092 cri.go:89] found id: "f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731"
	I1201 20:09:29.366198  371092 cri.go:89] found id: "4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7"
	I1201 20:09:29.366203  371092 cri.go:89] found id: "024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	I1201 20:09:29.366208  371092 cri.go:89] found id: "584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f"
	I1201 20:09:29.366212  371092 cri.go:89] found id: "25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b"
	I1201 20:09:29.366216  371092 cri.go:89] found id: "436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af"
	I1201 20:09:29.366221  371092 cri.go:89] found id: "43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5"
	I1201 20:09:29.366238  371092 cri.go:89] found id: "b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	I1201 20:09:29.366242  371092 cri.go:89] found id: "95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0"
	I1201 20:09:29.366246  371092 cri.go:89] found id: ""
	I1201 20:09:29.366301  371092 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:29.384157  371092 out.go:203] 
	W1201 20:09:29.385565  371092 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:09:29.385585  371092 out.go:285] * 
	* 
	W1201 20:09:29.392467  371092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:09:29.393816  371092 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-990820 --alsologtostderr -v=1 failed: exit status 80
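The stderr above shows what produced that exit status 80: the pause path shells out to `sudo runc list -f json` on the node, retries a few times, and then aborts with GUEST_PAUSE because `/run/runc` (runc's default state directory; crio can point its runtime elsewhere via its config) does not exist. The standalone Go sketch below is not minikube's implementation; it only imitates the retry loop visible in the log, with the delays copied from the retry.go lines above purely for illustration, so the shape of the failure is easier to see.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunningContainers mirrors the command the log shows minikube running
	// on the node: `sudo runc list -f json`. runc keeps its state under
	// /run/runc by default, so the command fails with
	// "open /run/runc: no such file or directory" when that directory was
	// never created.
	func listRunningContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		return string(out), err
	}

	func main() {
		// Delays taken from the retry.go lines in the stderr above, only to
		// make the sketch resemble the observed behaviour.
		delays := []time.Duration{
			334 * time.Millisecond,
			295 * time.Millisecond,
			543 * time.Millisecond,
		}
		for attempt, d := range delays {
			out, err := listRunningContainers()
			if err == nil {
				fmt.Printf("running containers:\n%s\n", out)
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s\nretrying after %v\n", attempt+1, err, out, d)
			time.Sleep(d)
		}
		// Once the retries are exhausted, minikube reports GUEST_PAUSE, which
		// is the exit status 80 recorded by the test.
		fmt.Println("giving up: this is where minikube raises GUEST_PAUSE")
	}

Run on the node (for example via `minikube ssh`), this sketch should reproduce the same "open /run/runc: no such file or directory" error string seen in the stderr block.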
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-990820
helpers_test.go:243: (dbg) docker inspect embed-certs-990820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	        "Created": "2025-12-01T20:07:26.934282918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:28.819181974Z",
	            "FinishedAt": "2025-12-01T20:08:27.747190949Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hosts",
	        "LogPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808-json.log",
	        "Name": "/embed-certs-990820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-990820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-990820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	                "LowerDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-990820",
	                "Source": "/var/lib/docker/volumes/embed-certs-990820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-990820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-990820",
	                "name.minikube.sigs.k8s.io": "embed-certs-990820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fd4109132cc2c7f1405454c0a21fe16f15719ec96ee9fd7859d3d91bbf775579",
	            "SandboxKey": "/var/run/docker/netns/fd4109132cc2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-990820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f73505fd10b0a75826b9bbfa88683343c0777746fa3af258502ff4a892fc61da",
	                    "EndpointID": "7b51af0d4b277ef097a3b5f02c24f94319c6bca58d1366d410d7fe134414a675",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:74:40:0f:bb:35",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-990820",
	                        "30c5f9257afd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820: exit status 2 (407.415595ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25: (1.321752516s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
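Both preload mirrors return 404 for the v1.35.0-beta.0 cri-o tarball, so minikube falls back to its per-image cache (the cache.go lines that follow). A minimal check of mirror availability from the build host, using the URLs exactly as logged:

  curl -sIL -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
  curl -sIL -o /dev/null -w '%{http_code}\n' https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4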
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
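The provisioning step above writes CRIO_MINIKUBE_OPTIONS (the service CIDR as an insecure registry) to /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch to confirm the file landed, assuming the newest-cni-456990 profile from this log is still running:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- cat /etc/sysconfig/crio.minikube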
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
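The sed edits above set pause_image, cgroup_manager, conmon_cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A quick, hypothetical way to inspect the resulting drop-in on the node:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf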
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
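The kubelet unit override shown above is later written to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp further down). A minimal sketch to view the effective unit, assuming the profile from this log:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- systemctl cat kubelet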
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
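The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch to read it back and, assuming a kubeadm binary sits in the versioned binaries directory checked a few lines above, validate it with the config validate subcommand available in recent kubeadm releases:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new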
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
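Each profile certificate is verified with openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). The same check can be repeated by hand on the node, for example:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo apiserver cert ok for 24h"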
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
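The unpause probe fails here because sudo runc list -f json cannot open /run/runc; minikube logs the failure (kubeadm.go:408) and falls back to the configuration-file check on the next line. Reproducing the probe on the node uses the same command the log shows:

  out/minikube-linux-amd64 -p newest-cni-456990 ssh -- sudo runc list -f json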
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	
	
	==> CRI-O <==
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.3700533Z" level=info msg="Started container" PID=1753 containerID=be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper id=fa016f3f-9075-4962-96dd-2b608ce025f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=933e6ba8c90f7d75332996ab0408b1ac6ae07af3798c1efc76938b28daa951af
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.453719381Z" level=info msg="Removing container: d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723" id=7f0660f7-69d3-4a4e-a8ea-80c762f719a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.464388885Z" level=info msg="Removed container d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=7f0660f7-69d3-4a4e-a8ea-80c762f719a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.480597333Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=125140c7-8c6f-4ed0-89ad-865a7adfcf2a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.481720497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=93be1a8b-6433-440e-95c5-a7497ff798b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.483530316Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=505e0290-550c-416f-ba1f-2a00584f9c5f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.483687756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.494826577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495026299Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8bc61182f0201669bfef487811d548f71fb7ac33dc875f2af405476ab2cdb5a0/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495063049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8bc61182f0201669bfef487811d548f71fb7ac33dc875f2af405476ab2cdb5a0/merged/etc/group: no such file or directory"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495399496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.532972591Z" level=info msg="Created container 0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5: kube-system/storage-provisioner/storage-provisioner" id=505e0290-550c-416f-ba1f-2a00584f9c5f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.534403291Z" level=info msg="Starting container: 0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5" id=d246e87b-a375-45db-a0fe-c357bf25a540 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.537085092Z" level=info msg="Started container" PID=1767 containerID=0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5 description=kube-system/storage-provisioner/storage-provisioner id=d246e87b-a375-45db-a0fe-c357bf25a540 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75153ae014c9a3e4e272c0358cd024291c58dbdb7324f6f7b4520722caee9d05
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.328646737Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da7b4ba7-625f-43ec-9c1f-ee49165d4ccc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.329570616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9473eeb-d7b3-465b-be5f-3397919b4d05 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.33065595Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=4c2f1641-2a95-48a4-9f79-fd0b8be4d0ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.330806274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.337131534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.337650224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.372525332Z" level=info msg="Created container b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=4c2f1641-2a95-48a4-9f79-fd0b8be4d0ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.373110922Z" level=info msg="Starting container: b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03" id=d12f9e2d-0f43-48ca-991f-2dbb4197db07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.374733713Z" level=info msg="Started container" PID=1799 containerID=b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper id=d12f9e2d-0f43-48ca-991f-2dbb4197db07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=933e6ba8c90f7d75332996ab0408b1ac6ae07af3798c1efc76938b28daa951af
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.519404945Z" level=info msg="Removing container: be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39" id=841d56de-9a83-41dd-a2a1-49c9ced4eb2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.531763176Z" level=info msg="Removed container be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=841d56de-9a83-41dd-a2a1-49c9ced4eb2b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b7c139416e643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   933e6ba8c90f7       dashboard-metrics-scraper-6ffb444bf9-zd82z   kubernetes-dashboard
	0e6cbd36339ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   75153ae014c9a       storage-provisioner                          kube-system
	95a5db908d0f9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   b3aef1aedb56f       kubernetes-dashboard-855c9754f9-k848d        kubernetes-dashboard
	609de5f088db1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   1750324f85f6b       busybox                                      default
	4fdb92ed74e9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   094fb8e4d08c6       coredns-66bc5c9577-qngk9                     kube-system
	f929873edd40a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           50 seconds ago      Running             kube-proxy                  0                   fd9694f84e1ac       kube-proxy-t2nmz                             kube-system
	4c0ffede7147b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   b4e275bc96740       kindnet-cpmn4                                kube-system
	024187867d4a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   75153ae014c9a       storage-provisioner                          kube-system
	584186b54e74d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   a702defaf40bc       kube-controller-manager-embed-certs-990820   kube-system
	25d3d677299eb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   86a731aa3ef11       kube-apiserver-embed-certs-990820            kube-system
	436c2d3a56ed7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   e15af61c9ef01       kube-scheduler-embed-certs-990820            kube-system
	43e75c3651562       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   9a7f31254f9a5       etcd-embed-certs-990820                      kube-system
	
	
	==> coredns [4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35297 - 40627 "HINFO IN 3723764360532052538.304018883644968768. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021406889s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-990820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-990820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=embed-certs-990820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-990820
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-990820
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a8c77f5a-6866-4f6d-8e46-091d133c30f0
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-qngk9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-embed-certs-990820                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-cpmn4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-embed-certs-990820             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-990820    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-t2nmz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-embed-certs-990820             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zd82z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k848d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node embed-certs-990820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node embed-certs-990820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node embed-certs-990820 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node embed-certs-990820 event: Registered Node embed-certs-990820 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-990820 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-990820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-990820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-990820 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-990820 event: Registered Node embed-certs-990820 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5] <==
	{"level":"warn","ts":"2025-12-01T20:08:37.549388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.566843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.579144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.590623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.600214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.613404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.623506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.640036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.652327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.675058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.697644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.709264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.723673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.735243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.749231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.775971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.793218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.799680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.817203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.832252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.852525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.871675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.877702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.967223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35780","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T20:08:47.589868Z","caller":"traceutil/trace.go:172","msg":"trace[205967263] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"169.113412ms","start":"2025-12-01T20:08:47.420730Z","end":"2025-12-01T20:08:47.589843Z","steps":["trace[205967263] 'process raft request'  (duration: 168.935782ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:09:30 up  1:52,  0 user,  load average: 3.77, 3.38, 2.40
	Linux embed-certs-990820 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7] <==
	I1201 20:08:40.022660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:40.022882       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1201 20:08:40.023049       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:40.023076       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:40.023102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:40.233462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:40.233533       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:40.233545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:40.318094       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:40.684220       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:40.684251       1 metrics.go:72] Registering metrics
	I1201 20:08:40.684344       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:50.232553       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:08:50.232644       1 main.go:301] handling current node
	I1201 20:09:00.232619       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:00.232650       1 main.go:301] handling current node
	I1201 20:09:10.233495       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:10.233536       1 main.go:301] handling current node
	I1201 20:09:20.232441       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:20.232491       1 main.go:301] handling current node
	I1201 20:09:30.233364       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:30.233425       1 main.go:301] handling current node
	
	
	==> kube-apiserver [25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b] <==
	I1201 20:08:38.662540       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:08:38.662548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:08:38.662554       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:08:38.662716       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 20:08:38.663400       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 20:08:38.663654       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:08:38.664231       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 20:08:38.665056       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:08:38.665162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:08:38.665220       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 20:08:38.665246       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1201 20:08:38.675141       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1201 20:08:38.684360       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:08:38.706137       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:39.077407       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:08:39.210075       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:08:39.253602       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:39.265627       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:39.278133       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:08:39.332047       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.129.198"}
	I1201 20:08:39.365919       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.210.223"}
	I1201 20:08:39.566261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:08:42.558994       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:08:42.609046       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:08:42.659066       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f] <==
	I1201 20:08:42.150082       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:08:42.151175       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:08:42.153342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1201 20:08:42.157082       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1201 20:08:42.157122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:08:42.157153       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:08:42.157178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 20:08:42.157162       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 20:08:42.157616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 20:08:42.159859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:08:42.162256       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 20:08:42.162263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:08:42.163402       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 20:08:42.164582       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:08:42.164646       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:08:42.164696       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:08:42.164706       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:08:42.164712       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:08:42.167839       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:08:42.170043       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 20:08:42.171234       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 20:08:42.173434       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:08:42.175716       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:08:42.177965       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 20:08:42.187474       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731] <==
	I1201 20:08:39.811113       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:08:39.896775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:08:39.997476       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:08:39.997522       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1201 20:08:39.997631       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:08:40.037759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:40.037828       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:08:40.045949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:08:40.046737       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:08:40.046859       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:40.049959       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:08:40.051037       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:08:40.050098       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:08:40.055934       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:08:40.050342       1 config.go:200] "Starting service config controller"
	I1201 20:08:40.050088       1 config.go:309] "Starting node config controller"
	I1201 20:08:40.055965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:08:40.055978       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:08:40.055973       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:08:40.157365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:08:40.157466       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:08:40.157337       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af] <==
	I1201 20:08:37.262940       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:08:38.584262       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:08:38.584308       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:08:38.584328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:08:38.584338       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:08:38.625000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:08:38.625029       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:38.627172       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:38.627216       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:38.627534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:08:38.627787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:08:38.728386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:08:43 embed-certs-990820 kubelet[722]: I1201 20:08:43.597776     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:08:45 embed-certs-990820 kubelet[722]: I1201 20:08:45.401955     722 scope.go:117] "RemoveContainer" containerID="02a4d51efa8b8a18c4ffe95aeb16841aa1545dd56241a4e078995406982c0c0d"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: I1201 20:08:46.406894     722 scope.go:117] "RemoveContainer" containerID="02a4d51efa8b8a18c4ffe95aeb16841aa1545dd56241a4e078995406982c0c0d"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: I1201 20:08:46.407054     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: E1201 20:08:46.407328     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:08:47 embed-certs-990820 kubelet[722]: I1201 20:08:47.411997     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:47 embed-certs-990820 kubelet[722]: E1201 20:08:47.412205     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:08:49 embed-certs-990820 kubelet[722]: I1201 20:08:49.429980     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k848d" podStartSLOduration=1.791904638 podStartE2EDuration="7.429957459s" podCreationTimestamp="2025-12-01 20:08:42 +0000 UTC" firstStartedPulling="2025-12-01 20:08:43.106352777 +0000 UTC m=+7.885641811" lastFinishedPulling="2025-12-01 20:08:48.744405603 +0000 UTC m=+13.523694632" observedRunningTime="2025-12-01 20:08:49.429883654 +0000 UTC m=+14.209172716" watchObservedRunningTime="2025-12-01 20:08:49.429957459 +0000 UTC m=+14.209246501"
	Dec 01 20:08:50 embed-certs-990820 kubelet[722]: I1201 20:08:50.985334     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:50 embed-certs-990820 kubelet[722]: E1201 20:08:50.985508     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.326445     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.452479     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.452711     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: E1201 20:09:01.452932     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: I1201 20:09:10.479829     722 scope.go:117] "RemoveContainer" containerID="024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: I1201 20:09:10.985655     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: E1201 20:09:10.985904     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.328185     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.518192     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.518438     722 scope.go:117] "RemoveContainer" containerID="b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: E1201 20:09:23.518716     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: kubelet.service: Consumed 1.737s CPU time.
	
	
	==> kubernetes-dashboard [95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0] <==
	2025/12/01 20:08:48 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:48 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:48 Using secret token for csrf signing
	2025/12/01 20:08:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/01 20:08:48 Generating JWE encryption key
	2025/12/01 20:08:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:49 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:49 Creating in-cluster Sidecar client
	2025/12/01 20:08:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:49 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:48 Starting overwatch
	
	
	==> storage-provisioner [024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130] <==
	I1201 20:08:39.754192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:09.759706       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5] <==
	I1201 20:09:10.554938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:10.565954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:10.566263       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:10.569646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:14.025372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:18.286395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:21.885960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:24.939338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.962457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.966692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:27.966884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:27.966944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eba9bb8f-e5ee-4b48-8968-4ade718acf50", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933 became leader
	I1201 20:09:27.967113       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933!
	W1201 20:09:27.969244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.974700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:28.067512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933!
	W1201 20:09:29.978487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:29.984254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-990820 -n embed-certs-990820
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-990820 -n embed-certs-990820: exit status 2 (344.417201ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-990820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-990820
helpers_test.go:243: (dbg) docker inspect embed-certs-990820:

-- stdout --
	[
	    {
	        "Id": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	        "Created": "2025-12-01T20:07:26.934282918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:28.819181974Z",
	            "FinishedAt": "2025-12-01T20:08:27.747190949Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/hosts",
	        "LogPath": "/var/lib/docker/containers/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808/30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808-json.log",
	        "Name": "/embed-certs-990820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-990820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-990820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c5f9257afd7ef640eb68ca5a1ac9e0e703c1a9d6dd9d9097aad17a5f155808",
	                "LowerDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/52d51afa65c81dabd7f00215b748ac0eae1c8359aa330a8b7abd7675dda733af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-990820",
	                "Source": "/var/lib/docker/volumes/embed-certs-990820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-990820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-990820",
	                "name.minikube.sigs.k8s.io": "embed-certs-990820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fd4109132cc2c7f1405454c0a21fe16f15719ec96ee9fd7859d3d91bbf775579",
	            "SandboxKey": "/var/run/docker/netns/fd4109132cc2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-990820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f73505fd10b0a75826b9bbfa88683343c0777746fa3af258502ff4a892fc61da",
	                    "EndpointID": "7b51af0d4b277ef097a3b5f02c24f94319c6bca58d1366d410d7fe134414a675",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:74:40:0f:bb:35",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-990820",
	                        "30c5f9257afd"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820: exit status 2 (330.818903ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-990820 logs -n 25: (1.167655567s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p embed-certs-990820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ stop    │ -p embed-certs-990820 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:29.232822  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:29.232838  369577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:29.232934  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.271089  369577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.271109  369577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:29.271168  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.299544  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.374473  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:29.393341  369577 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:29.393411  369577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:29.397957  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:29.397976  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:29.401460  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.414861  369577 api_server.go:72] duration metric: took 215.119797ms to wait for apiserver process to appear ...
	I1201 20:09:29.414970  369577 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:29.415004  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:29.418380  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:29.418401  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:29.422686  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.442227  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:29.442256  369577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:29.462696  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:29.462720  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:29.488037  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:29.488054  369577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:29.503571  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:29.503606  369577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:29.520206  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:29.520228  369577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:29.535881  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:29.535904  369577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:29.552205  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:29.552229  369577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:29.569173  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:30.447688  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.447714  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.447729  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.491568  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.491608  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.915119  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.920667  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:30.920698  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.073336  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671812187s)
	I1201 20:09:31.073416  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.650688692s)
	I1201 20:09:31.073529  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.504317755s)
	I1201 20:09:31.074936  369577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-456990 addons enable metrics-server
	
	I1201 20:09:31.086132  369577 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:31.087441  369577 addons.go:530] duration metric: took 1.88767322s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:09:31.415255  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.419239  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:31.419264  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.915470  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.920415  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:31.921522  369577 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:31.921546  369577 api_server.go:131] duration metric: took 2.506562046s to wait for apiserver health ...
	I1201 20:09:31.921555  369577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:31.925533  369577 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:31.925565  369577 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925575  369577 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:31.925588  369577 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Running
	I1201 20:09:31.925605  369577 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:31.925615  369577 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:31.925621  369577 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Running
	I1201 20:09:31.925634  369577 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:31.925643  369577 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925653  369577 system_pods.go:74] duration metric: took 4.093389ms to wait for pod list to return data ...
	I1201 20:09:31.925664  369577 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:31.928075  369577 default_sa.go:45] found service account: "default"
	I1201 20:09:31.928096  369577 default_sa.go:55] duration metric: took 2.423245ms for default service account to be created ...
	I1201 20:09:31.928110  369577 kubeadm.go:587] duration metric: took 2.728376297s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:31.928130  369577 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:31.930417  369577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:31.930440  369577 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:31.930454  369577 node_conditions.go:105] duration metric: took 2.318192ms to run NodePressure ...
	I1201 20:09:31.930467  369577 start.go:242] waiting for startup goroutines ...
	I1201 20:09:31.930480  369577 start.go:247] waiting for cluster config update ...
	I1201 20:09:31.930496  369577 start.go:256] writing updated cluster config ...
	I1201 20:09:31.930881  369577 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:31.982349  369577 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:31.984030  369577 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.3700533Z" level=info msg="Started container" PID=1753 containerID=be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper id=fa016f3f-9075-4962-96dd-2b608ce025f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=933e6ba8c90f7d75332996ab0408b1ac6ae07af3798c1efc76938b28daa951af
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.453719381Z" level=info msg="Removing container: d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723" id=7f0660f7-69d3-4a4e-a8ea-80c762f719a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:01 embed-certs-990820 crio[561]: time="2025-12-01T20:09:01.464388885Z" level=info msg="Removed container d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=7f0660f7-69d3-4a4e-a8ea-80c762f719a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.480597333Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=125140c7-8c6f-4ed0-89ad-865a7adfcf2a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.481720497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=93be1a8b-6433-440e-95c5-a7497ff798b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.483530316Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=505e0290-550c-416f-ba1f-2a00584f9c5f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.483687756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.494826577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495026299Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8bc61182f0201669bfef487811d548f71fb7ac33dc875f2af405476ab2cdb5a0/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495063049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8bc61182f0201669bfef487811d548f71fb7ac33dc875f2af405476ab2cdb5a0/merged/etc/group: no such file or directory"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.495399496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.532972591Z" level=info msg="Created container 0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5: kube-system/storage-provisioner/storage-provisioner" id=505e0290-550c-416f-ba1f-2a00584f9c5f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.534403291Z" level=info msg="Starting container: 0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5" id=d246e87b-a375-45db-a0fe-c357bf25a540 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:10 embed-certs-990820 crio[561]: time="2025-12-01T20:09:10.537085092Z" level=info msg="Started container" PID=1767 containerID=0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5 description=kube-system/storage-provisioner/storage-provisioner id=d246e87b-a375-45db-a0fe-c357bf25a540 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75153ae014c9a3e4e272c0358cd024291c58dbdb7324f6f7b4520722caee9d05
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.328646737Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da7b4ba7-625f-43ec-9c1f-ee49165d4ccc name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.329570616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9473eeb-d7b3-465b-be5f-3397919b4d05 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.33065595Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=4c2f1641-2a95-48a4-9f79-fd0b8be4d0ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.330806274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.337131534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.337650224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.372525332Z" level=info msg="Created container b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=4c2f1641-2a95-48a4-9f79-fd0b8be4d0ac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.373110922Z" level=info msg="Starting container: b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03" id=d12f9e2d-0f43-48ca-991f-2dbb4197db07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.374733713Z" level=info msg="Started container" PID=1799 containerID=b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper id=d12f9e2d-0f43-48ca-991f-2dbb4197db07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=933e6ba8c90f7d75332996ab0408b1ac6ae07af3798c1efc76938b28daa951af
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.519404945Z" level=info msg="Removing container: be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39" id=841d56de-9a83-41dd-a2a1-49c9ced4eb2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:23 embed-certs-990820 crio[561]: time="2025-12-01T20:09:23.531763176Z" level=info msg="Removed container be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z/dashboard-metrics-scraper" id=841d56de-9a83-41dd-a2a1-49c9ced4eb2b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b7c139416e643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   933e6ba8c90f7       dashboard-metrics-scraper-6ffb444bf9-zd82z   kubernetes-dashboard
	0e6cbd36339ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   75153ae014c9a       storage-provisioner                          kube-system
	95a5db908d0f9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   b3aef1aedb56f       kubernetes-dashboard-855c9754f9-k848d        kubernetes-dashboard
	609de5f088db1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   1750324f85f6b       busybox                                      default
	4fdb92ed74e9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   094fb8e4d08c6       coredns-66bc5c9577-qngk9                     kube-system
	f929873edd40a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   fd9694f84e1ac       kube-proxy-t2nmz                             kube-system
	4c0ffede7147b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   b4e275bc96740       kindnet-cpmn4                                kube-system
	024187867d4a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   75153ae014c9a       storage-provisioner                          kube-system
	584186b54e74d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   a702defaf40bc       kube-controller-manager-embed-certs-990820   kube-system
	25d3d677299eb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   86a731aa3ef11       kube-apiserver-embed-certs-990820            kube-system
	436c2d3a56ed7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   e15af61c9ef01       kube-scheduler-embed-certs-990820            kube-system
	43e75c3651562       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   9a7f31254f9a5       etcd-embed-certs-990820                      kube-system
	
	
	==> coredns [4fdb92ed74e9ad10de5bb03824d9222a74a2a1a06678f3199b5801ade9763ad3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35297 - 40627 "HINFO IN 3723764360532052538.304018883644968768. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021406889s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-990820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-990820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=embed-certs-990820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_07_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-990820
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:09 +0000   Mon, 01 Dec 2025 20:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-990820
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                a8c77f5a-6866-4f6d-8e46-091d133c30f0
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-qngk9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-990820                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-cpmn4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-990820             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-990820    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-t2nmz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-990820             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zd82z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k848d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-990820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-990820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-990820 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-990820 event: Registered Node embed-certs-990820 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-990820 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-990820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-990820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-990820 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-990820 event: Registered Node embed-certs-990820 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [43e75c365156208b44d268aa4b8b8fce1d12a9782bd3c84385daeaddd340cca5] <==
	{"level":"warn","ts":"2025-12-01T20:08:37.549388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.566843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.579144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.590623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.600214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.613404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.623506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.640036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.652327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.675058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.697644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.709264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.723673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.735243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.749231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.775971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.793218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.799680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.817203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.832252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.852525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.871675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.877702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:08:37.967223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35780","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-01T20:08:47.589868Z","caller":"traceutil/trace.go:172","msg":"trace[205967263] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"169.113412ms","start":"2025-12-01T20:08:47.420730Z","end":"2025-12-01T20:08:47.589843Z","steps":["trace[205967263] 'process raft request'  (duration: 168.935782ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:09:32 up  1:52,  0 user,  load average: 3.62, 3.36, 2.39
	Linux embed-certs-990820 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c0ffede7147bb388045b457abb0076154baedb2439360e6abf4413300e680b7] <==
	I1201 20:08:40.022660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:08:40.022882       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1201 20:08:40.023049       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:08:40.023076       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:08:40.023102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:08:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:08:40.233462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:08:40.233533       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:08:40.233545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:08:40.318094       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:08:40.684220       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:08:40.684251       1 metrics.go:72] Registering metrics
	I1201 20:08:40.684344       1 controller.go:711] "Syncing nftables rules"
	I1201 20:08:50.232553       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:08:50.232644       1 main.go:301] handling current node
	I1201 20:09:00.232619       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:00.232650       1 main.go:301] handling current node
	I1201 20:09:10.233495       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:10.233536       1 main.go:301] handling current node
	I1201 20:09:20.232441       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:20.232491       1 main.go:301] handling current node
	I1201 20:09:30.233364       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1201 20:09:30.233425       1 main.go:301] handling current node
	
	
	==> kube-apiserver [25d3d677299ebe45e1a5514b80aaf8beaf32d1df3663ce2202e6bb7685a33a0b] <==
	I1201 20:08:38.662540       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:08:38.662548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:08:38.662554       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:08:38.662716       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 20:08:38.663400       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 20:08:38.663654       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:08:38.664231       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 20:08:38.665056       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:08:38.665162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:08:38.665220       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1201 20:08:38.665246       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1201 20:08:38.675141       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1201 20:08:38.684360       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:08:38.706137       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:08:39.077407       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:08:39.210075       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:08:39.253602       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:08:39.265627       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:08:39.278133       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:08:39.332047       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.129.198"}
	I1201 20:08:39.365919       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.210.223"}
	I1201 20:08:39.566261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:08:42.558994       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:08:42.609046       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:08:42.659066       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [584186b54e74d08f4b6af4c9898f57737a8d5d0858f1cf2e7f22fcc29d1d0d0f] <==
	I1201 20:08:42.150082       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:08:42.151175       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:08:42.153342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1201 20:08:42.157082       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1201 20:08:42.157122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:08:42.157153       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:08:42.157178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1201 20:08:42.157162       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1201 20:08:42.157616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 20:08:42.159859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:08:42.162256       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1201 20:08:42.162263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:08:42.163402       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1201 20:08:42.164582       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:08:42.164646       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:08:42.164696       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:08:42.164706       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:08:42.164712       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:08:42.167839       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:08:42.170043       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 20:08:42.171234       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1201 20:08:42.173434       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:08:42.175716       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:08:42.177965       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 20:08:42.187474       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f929873edd40a65edd646a4ffb3facf2da3c722d6303e0512de077b9d0a68731] <==
	I1201 20:08:39.811113       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:08:39.896775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:08:39.997476       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:08:39.997522       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1201 20:08:39.997631       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:08:40.037759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:08:40.037828       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:08:40.045949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:08:40.046737       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:08:40.046859       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:40.049959       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:08:40.051037       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:08:40.050098       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:08:40.055934       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:08:40.050342       1 config.go:200] "Starting service config controller"
	I1201 20:08:40.050088       1 config.go:309] "Starting node config controller"
	I1201 20:08:40.055965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:08:40.055978       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:08:40.055973       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:08:40.157365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:08:40.157466       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:08:40.157337       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [436c2d3a56ed714769b430e6e9a94e1e0be241f59ee8e5567f0147fc16a8b5af] <==
	I1201 20:08:37.262940       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:08:38.584262       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:08:38.584308       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:08:38.584328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:08:38.584338       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:08:38.625000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:08:38.625029       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:08:38.627172       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:38.627216       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:08:38.627534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:08:38.627787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:08:38.728386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:08:43 embed-certs-990820 kubelet[722]: I1201 20:08:43.597776     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:08:45 embed-certs-990820 kubelet[722]: I1201 20:08:45.401955     722 scope.go:117] "RemoveContainer" containerID="02a4d51efa8b8a18c4ffe95aeb16841aa1545dd56241a4e078995406982c0c0d"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: I1201 20:08:46.406894     722 scope.go:117] "RemoveContainer" containerID="02a4d51efa8b8a18c4ffe95aeb16841aa1545dd56241a4e078995406982c0c0d"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: I1201 20:08:46.407054     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:46 embed-certs-990820 kubelet[722]: E1201 20:08:46.407328     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:08:47 embed-certs-990820 kubelet[722]: I1201 20:08:47.411997     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:47 embed-certs-990820 kubelet[722]: E1201 20:08:47.412205     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:08:49 embed-certs-990820 kubelet[722]: I1201 20:08:49.429980     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k848d" podStartSLOduration=1.791904638 podStartE2EDuration="7.429957459s" podCreationTimestamp="2025-12-01 20:08:42 +0000 UTC" firstStartedPulling="2025-12-01 20:08:43.106352777 +0000 UTC m=+7.885641811" lastFinishedPulling="2025-12-01 20:08:48.744405603 +0000 UTC m=+13.523694632" observedRunningTime="2025-12-01 20:08:49.429883654 +0000 UTC m=+14.209172716" watchObservedRunningTime="2025-12-01 20:08:49.429957459 +0000 UTC m=+14.209246501"
	Dec 01 20:08:50 embed-certs-990820 kubelet[722]: I1201 20:08:50.985334     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:08:50 embed-certs-990820 kubelet[722]: E1201 20:08:50.985508     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.326445     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.452479     722 scope.go:117] "RemoveContainer" containerID="d781e1157088cb312a9be8a45baad4f33e7e78c033da4be1784721433b620723"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: I1201 20:09:01.452711     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:01 embed-certs-990820 kubelet[722]: E1201 20:09:01.452932     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: I1201 20:09:10.479829     722 scope.go:117] "RemoveContainer" containerID="024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: I1201 20:09:10.985655     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:10 embed-certs-990820 kubelet[722]: E1201 20:09:10.985904     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.328185     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.518192     722 scope.go:117] "RemoveContainer" containerID="be87601c5cd51d0c6ad6b9bbcf8359257562bbb4d09301a204a8399a5e3e2f39"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: I1201 20:09:23.518438     722 scope.go:117] "RemoveContainer" containerID="b7c139416e64326871c6279cfaedf2114435dcce5cfc3c350e27cccab3f96c03"
	Dec 01 20:09:23 embed-certs-990820 kubelet[722]: E1201 20:09:23.518716     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zd82z_kubernetes-dashboard(a1005f4a-e801-4d34-808b-1a12b9e82bf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zd82z" podUID="a1005f4a-e801-4d34-808b-1a12b9e82bf3"
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:27 embed-certs-990820 systemd[1]: kubelet.service: Consumed 1.737s CPU time.
	
	
	==> kubernetes-dashboard [95a5db908d0f958e0f41565e162ada19605efe83675bab81437d84bbf01f16a0] <==
	2025/12/01 20:08:48 Using namespace: kubernetes-dashboard
	2025/12/01 20:08:48 Using in-cluster config to connect to apiserver
	2025/12/01 20:08:48 Using secret token for csrf signing
	2025/12/01 20:08:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:08:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:08:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/01 20:08:48 Generating JWE encryption key
	2025/12/01 20:08:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:08:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:08:49 Initializing JWE encryption key from synchronized object
	2025/12/01 20:08:49 Creating in-cluster Sidecar client
	2025/12/01 20:08:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:49 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:08:48 Starting overwatch
	
	
	==> storage-provisioner [024187867d4a732b555d4cc18c0d9d9c23da82baa0b6a2c1ca3ec5132724b130] <==
	I1201 20:08:39.754192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:09.759706       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [0e6cbd36339ce6db9e486c296eb36ba49ff15f5f73de17bfbc76fffa6e787cf5] <==
	I1201 20:09:10.554938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:10.565954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:10.566263       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:10.569646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:14.025372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:18.286395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:21.885960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:24.939338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.962457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.966692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:27.966884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:27.966944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eba9bb8f-e5ee-4b48-8968-4ade718acf50", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933 became leader
	I1201 20:09:27.967113       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933!
	W1201 20:09:27.969244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:27.974700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:28.067512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-990820_7c1eb708-887b-4484-a922-95a3f339c933!
	W1201 20:09:29.978487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:29.984254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:31.988154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:31.992001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-990820 -n embed-certs-990820
E1201 20:09:33.483493   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-990820 -n embed-certs-990820: exit status 2 (344.759841ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-990820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.72s)

x
+
TestStartStop/group/newest-cni/serial/Pause (6.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-456990 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-456990 --alsologtostderr -v=1: exit status 80 (2.467218673s)

-- stdout --
	* Pausing node newest-cni-456990 ... 
	
	

-- /stdout --
** stderr ** 
	I1201 20:09:32.677762  373613 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:32.678011  373613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:32.678020  373613 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:32.678028  373613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:32.678241  373613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:32.678477  373613 out.go:368] Setting JSON to false
	I1201 20:09:32.678496  373613 mustload.go:66] Loading cluster: newest-cni-456990
	I1201 20:09:32.678880  373613 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:32.679271  373613 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:32.699672  373613 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:32.699994  373613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:32.766349  373613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-01 20:09:32.753469306 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:32.767126  373613 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-456990 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:09:32.768840  373613 out.go:179] * Pausing node newest-cni-456990 ... 
	I1201 20:09:32.769809  373613 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:32.770111  373613 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:32.770148  373613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:32.789335  373613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:32.890934  373613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:32.905161  373613 pause.go:52] kubelet running: true
	I1201 20:09:32.905229  373613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:33.049270  373613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:33.049380  373613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:33.122638  373613 cri.go:89] found id: "f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237"
	I1201 20:09:33.122662  373613 cri.go:89] found id: "5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb"
	I1201 20:09:33.122668  373613 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:33.122672  373613 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:33.122676  373613 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:33.122681  373613 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:33.122685  373613 cri.go:89] found id: ""
	I1201 20:09:33.122749  373613 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:33.135752  373613 retry.go:31] will retry after 255.043868ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:33Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:33.391203  373613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:33.404201  373613 pause.go:52] kubelet running: false
	I1201 20:09:33.404265  373613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:33.541487  373613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:33.541587  373613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:33.624852  373613 cri.go:89] found id: "f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237"
	I1201 20:09:33.624875  373613 cri.go:89] found id: "5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb"
	I1201 20:09:33.624881  373613 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:33.624886  373613 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:33.624892  373613 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:33.624898  373613 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:33.624903  373613 cri.go:89] found id: ""
	I1201 20:09:33.624947  373613 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:33.638505  373613 retry.go:31] will retry after 377.329175ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:33Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:34.016103  373613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:34.028507  373613 pause.go:52] kubelet running: false
	I1201 20:09:34.028561  373613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:34.175194  373613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:34.175318  373613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:34.241691  373613 cri.go:89] found id: "f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237"
	I1201 20:09:34.241712  373613 cri.go:89] found id: "5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb"
	I1201 20:09:34.241716  373613 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:34.241727  373613 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:34.241730  373613 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:34.241734  373613 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:34.241736  373613 cri.go:89] found id: ""
	I1201 20:09:34.241778  373613 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:34.253769  373613 retry.go:31] will retry after 611.376337ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:34Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:34.865390  373613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:34.878132  373613 pause.go:52] kubelet running: false
	I1201 20:09:34.878212  373613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:34.992241  373613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:34.992353  373613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:35.057436  373613 cri.go:89] found id: "f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237"
	I1201 20:09:35.057461  373613 cri.go:89] found id: "5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb"
	I1201 20:09:35.057466  373613 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:35.057471  373613 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:35.057476  373613 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:35.057481  373613 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:35.057487  373613 cri.go:89] found id: ""
	I1201 20:09:35.057530  373613 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:35.072476  373613 out.go:203] 
	W1201 20:09:35.073787  373613 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:09:35.073804  373613 out.go:285] * 
	W1201 20:09:35.078728  373613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:09:35.079946  373613 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-456990 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-456990
helpers_test.go:243: (dbg) docker inspect newest-cni-456990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	        "Created": "2025-12-01T20:08:39.724872977Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369888,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:09:22.237732733Z",
	            "FinishedAt": "2025-12-01T20:09:21.310829486Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hosts",
	        "LogPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d-json.log",
	        "Name": "/newest-cni-456990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-456990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-456990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	                "LowerDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456990",
	                "Source": "/var/lib/docker/volumes/newest-cni-456990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456990",
	                "name.minikube.sigs.k8s.io": "newest-cni-456990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7b5e63a31ef4f2ba9a078d92c6ba05082c73e4f3d50c5a95256ab6cb0a2219f",
	            "SandboxKey": "/var/run/docker/netns/b7b5e63a31ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-456990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6836073b2b5aeb1e24b66aebffccfdf9f8c813eeb874cb4432e2209cabcc4ee5",
	                    "EndpointID": "41613a53958a056219bc33821700d40d6a79b767fc738fb568b8c43a02e2e035",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:21:e5:9b:9c:6d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456990",
	                        "9f5dab6a37e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990: exit status 2 (344.74613ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-456990 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-456990 logs -n 25: (1.004445641s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ newest-cni-456990 image list --format=json                                                                                                                                                                                                           │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p newest-cni-456990 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
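For reference, the cri-o reconfiguration performed in the lines above (crictl endpoint, pause image, cgroup manager, then a restart) can be reproduced by hand. A minimal sketch of the same edits, with the file paths and pause image tag taken from the log; this is illustrative only, not a supported minikube entry point:

    # write crictl's runtime endpoint (same value the log writes to /etc/crictl.yaml)
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # point cri-o at the pause image and systemd cgroup manager, as in the sed commands above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # reload units and restart cri-o so the new config takes effect
    sudo systemctl daemon-reload && sudo systemctl restart crio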
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
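The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a rough sketch, a multi-document file like this can be schema-checked before use with kubeadm's own validator (assuming a kubeadm binary for this Kubernetes version is on PATH; the path below is the one from the log):

    # validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents above
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new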
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
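The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL can find it by hashed-directory lookup. A generic sketch of that pattern (the file name myCA.pem is a placeholder, not taken from the log):

    # compute the subject hash OpenSSL uses for directory lookups
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)
    # link the cert into the hashed location (".0" means first cert with this hash)
    sudo ln -fs /usr/share/ca-certificates/myCA.pem "/etc/ssl/certs/${HASH}.0"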
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:29.232822  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:29.232838  369577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:29.232934  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.271089  369577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.271109  369577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:29.271168  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.299544  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.374473  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:29.393341  369577 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:29.393411  369577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:29.397957  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:29.397976  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:29.401460  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.414861  369577 api_server.go:72] duration metric: took 215.119797ms to wait for apiserver process to appear ...
	I1201 20:09:29.414970  369577 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:29.415004  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:29.418380  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:29.418401  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:29.422686  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.442227  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:29.442256  369577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:29.462696  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:29.462720  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:29.488037  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:29.488054  369577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:29.503571  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:29.503606  369577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:29.520206  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:29.520228  369577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:29.535881  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:29.535904  369577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:29.552205  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:29.552229  369577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:29.569173  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:30.447688  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.447714  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.447729  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.491568  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.491608  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.915119  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.920667  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:30.920698  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.073336  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671812187s)
	I1201 20:09:31.073416  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.650688692s)
	I1201 20:09:31.073529  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.504317755s)
	I1201 20:09:31.074936  369577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-456990 addons enable metrics-server
	
	I1201 20:09:31.086132  369577 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:31.087441  369577 addons.go:530] duration metric: took 1.88767322s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:09:31.415255  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.419239  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:31.419264  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.915470  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.920415  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:31.921522  369577 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:31.921546  369577 api_server.go:131] duration metric: took 2.506562046s to wait for apiserver health ...
	I1201 20:09:31.921555  369577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:31.925533  369577 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:31.925565  369577 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925575  369577 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:31.925588  369577 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Running
	I1201 20:09:31.925605  369577 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:31.925615  369577 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:31.925621  369577 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Running
	I1201 20:09:31.925634  369577 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:31.925643  369577 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925653  369577 system_pods.go:74] duration metric: took 4.093389ms to wait for pod list to return data ...
	I1201 20:09:31.925664  369577 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:31.928075  369577 default_sa.go:45] found service account: "default"
	I1201 20:09:31.928096  369577 default_sa.go:55] duration metric: took 2.423245ms for default service account to be created ...
	I1201 20:09:31.928110  369577 kubeadm.go:587] duration metric: took 2.728376297s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:31.928130  369577 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:31.930417  369577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:31.930440  369577 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:31.930454  369577 node_conditions.go:105] duration metric: took 2.318192ms to run NodePressure ...
	I1201 20:09:31.930467  369577 start.go:242] waiting for startup goroutines ...
	I1201 20:09:31.930480  369577 start.go:247] waiting for cluster config update ...
	I1201 20:09:31.930496  369577 start.go:256] writing updated cluster config ...
	I1201 20:09:31.930881  369577 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:31.982349  369577 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:31.984030  369577 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
	W1201 20:09:29.388771  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:31.888825  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.852310234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.854603688Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6928cf06-a034-4161-855f-bd7d33a5de67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.857919534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.858533398Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=30d6d20b-08fc-4a96-826c-0a6ae891ad49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.859441408Z" level=info msg="Ran pod sandbox 2bfd83813e10e815ce0f893af0b08dea3945e3eb77a43b4f365207b592b21042 with infra container: kube-system/kindnet-gbbwm/POD" id=6928cf06-a034-4161-855f-bd7d33a5de67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.860462329Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.861271781Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=73e30363-f3da-4589-8f36-d18e48f304ad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.861702757Z" level=info msg="Ran pod sandbox b4759f29545ab4628240182f1350d49ca5a4a71b7b4459e93c53a898425e7886 with infra container: kube-system/kube-proxy-gmbzw/POD" id=30d6d20b-08fc-4a96-826c-0a6ae891ad49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.862746336Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d76e62b8-5c55-46eb-9ae7-dec9b93dfc6e name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.862770842Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3c49356a-63b7-410a-8640-bb9967c47060 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.863680585Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8276df3c-c3bd-4cd2-8c1c-69252be82c5f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.863945576Z" level=info msg="Creating container: kube-system/kindnet-gbbwm/kindnet-cni" id=48e33e4e-f06d-47f4-b89e-9c56ce59f592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864056799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864738552Z" level=info msg="Creating container: kube-system/kube-proxy-gmbzw/kube-proxy" id=4db22e27-333f-4543-8a84-2ae3cc9baf0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864864795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.870625962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.871232587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.873526603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.874104466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.916068088Z" level=info msg="Created container 5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb: kube-system/kindnet-gbbwm/kindnet-cni" id=48e33e4e-f06d-47f4-b89e-9c56ce59f592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.916921798Z" level=info msg="Starting container: 5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb" id=7f9953b0-42ee-4e69-bc46-96ba9996133c name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.918056644Z" level=info msg="Created container f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237: kube-system/kube-proxy-gmbzw/kube-proxy" id=4db22e27-333f-4543-8a84-2ae3cc9baf0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.918647725Z" level=info msg="Starting container: f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237" id=3bbdf843-25cd-4b06-9b24-6e04a6b57cca name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.919007842Z" level=info msg="Started container" PID=1039 containerID=5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb description=kube-system/kindnet-gbbwm/kindnet-cni id=7f9953b0-42ee-4e69-bc46-96ba9996133c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bfd83813e10e815ce0f893af0b08dea3945e3eb77a43b4f365207b592b21042
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.922592015Z" level=info msg="Started container" PID=1040 containerID=f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237 description=kube-system/kube-proxy-gmbzw/kube-proxy id=3bbdf843-25cd-4b06-9b24-6e04a6b57cca name=/runtime.v1.RuntimeService/StartContainer sandboxID=b4759f29545ab4628240182f1350d49ca5a4a71b7b4459e93c53a898425e7886
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f7ebe7c114089       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   b4759f29545ab       kube-proxy-gmbzw                            kube-system
	5ad9374ebdef5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   2bfd83813e10e       kindnet-gbbwm                               kube-system
	1417580c3497c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   d075b3743e5b1       kube-apiserver-newest-cni-456990            kube-system
	daab845ade168       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   8b7716a8e93d0       etcd-newest-cni-456990                      kube-system
	b6856377ff536       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   120d86efa0197       kube-controller-manager-newest-cni-456990   kube-system
	392fe0a49d21c       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   d7fcc6be4fa14       kube-scheduler-newest-cni-456990            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-456990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-456990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=newest-cni-456990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:09:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-456990
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-456990
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                725bbd5a-64fb-4dec-99aa-76f4e9244e2a
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-456990                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-gbbwm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-456990             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-456990    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gmbzw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-456990             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-456990 event: Registered Node newest-cni-456990 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-456990 event: Registered Node newest-cni-456990 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b] <==
	{"level":"warn","ts":"2025-12-01T20:09:29.804689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.811217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.823981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.831465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.838147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.844980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.851581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.865876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.873615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.880964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.889214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.897931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.904450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.911327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.918810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.927922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.935492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.943110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.950530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.957427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.972008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.981299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.990538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.999932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:30.053562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:36 up  1:52,  0 user,  load average: 3.49, 3.33, 2.39
	Linux newest-cni-456990 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb] <==
	I1201 20:09:31.053704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:09:31.053964       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1201 20:09:31.054096       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:09:31.054111       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:09:31.054136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:09:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:09:31.346799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:09:31.346846       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:09:31.346861       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:09:31.347026       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:09:31.548187       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:09:31.548210       1 metrics.go:72] Registering metrics
	I1201 20:09:31.548274       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7] <==
	I1201 20:09:30.541210       1 aggregator.go:187] initial CRD sync complete...
	I1201 20:09:30.541225       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:09:30.541231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:09:30.541237       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:09:30.541342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:09:30.542248       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:09:30.545958       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:30.548507       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:09:30.578166       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1201 20:09:30.589810       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:30.589903       1 policy_source.go:248] refreshing policies
	I1201 20:09:30.596172       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:09:30.678734       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:09:30.854375       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:09:30.888473       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:09:30.915933       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:09:30.923450       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:09:30.968918       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.9.233"}
	I1201 20:09:30.980426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.174.85"}
	I1201 20:09:31.444314       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:09:34.111405       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:09:34.259933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:34.259933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:34.310229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:09:34.360898       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be] <==
	I1201 20:09:33.676465       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680346       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680389       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680402       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680409       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680416       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:09:33.680423       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1201 20:09:33.680408       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680470       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680633       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680694       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680805       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1201 20:09:33.680859       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680917       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680937       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-456990"
	I1201 20:09:33.680846       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680968       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.681312       1 range_allocator.go:177] "Sending events to api server"
	I1201 20:09:33.681019       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1201 20:09:33.681418       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1201 20:09:33.681469       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:33.681496       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680958       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.682034       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.770401       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237] <==
	I1201 20:09:30.967835       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:09:31.027569       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:31.128033       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:31.128089       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1201 20:09:31.128180       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:09:31.146905       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:09:31.146969       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:09:31.152749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:09:31.153152       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:09:31.153191       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:31.154636       1 config.go:309] "Starting node config controller"
	I1201 20:09:31.154657       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:09:31.154701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:09:31.154708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:09:31.154733       1 config.go:200] "Starting service config controller"
	I1201 20:09:31.154740       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:09:31.154759       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:09:31.154764       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:09:31.255411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:09:31.255427       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:09:31.255426       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:09:31.255455       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b] <==
	I1201 20:09:29.463506       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:09:30.456015       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:09:30.456061       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:09:30.456074       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:09:30.456098       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:09:30.504765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1201 20:09:30.504801       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:30.507852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:30.507963       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:30.508068       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:09:30.508191       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:09:30.608219       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.658535     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-456990\" already exists" pod="kube-system/etcd-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.658577     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.666469     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-456990\" already exists" pod="kube-system/kube-apiserver-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.666511     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669448     662 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669541     662 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669574     662 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.670709     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.672609     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-456990\" already exists" pod="kube-system/kube-controller-manager-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.672637     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.675874     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-xtables-lock\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.675918     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-lib-modules\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676145     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-cni-cfg\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676180     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-xtables-lock\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676206     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-lib-modules\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.681353     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-456990\" already exists" pod="kube-system/kube-scheduler-newest-cni-456990"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589600     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-456990" containerName="kube-scheduler"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589715     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589920     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-456990" containerName="kube-apiserver"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.590095     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-456990" containerName="kube-controller-manager"
	Dec 01 20:09:32 newest-cni-456990 kubelet[662]: E1201 20:09:32.591420     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:33 newest-cni-456990 kubelet[662]: I1201 20:09:33.030063     662 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-456990 -n newest-cni-456990: exit status 2 (332.69747ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-456990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz: exit status 1 (58.124856ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-6t6ld" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ql7j9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-22lvz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-456990
helpers_test.go:243: (dbg) docker inspect newest-cni-456990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	        "Created": "2025-12-01T20:08:39.724872977Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369888,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:09:22.237732733Z",
	            "FinishedAt": "2025-12-01T20:09:21.310829486Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/hosts",
	        "LogPath": "/var/lib/docker/containers/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d/9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d-json.log",
	        "Name": "/newest-cni-456990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-456990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-456990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f5dab6a37e8a92a46af75a38ab0cc2e604011caee433ed7ba9d885d5362db7d",
	                "LowerDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5d1301ce923a60ce00166554a60c8b9aae799f69167d17f72e4c243b476ffee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-456990",
	                "Source": "/var/lib/docker/volumes/newest-cni-456990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-456990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-456990",
	                "name.minikube.sigs.k8s.io": "newest-cni-456990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7b5e63a31ef4f2ba9a078d92c6ba05082c73e4f3d50c5a95256ab6cb0a2219f",
	            "SandboxKey": "/var/run/docker/netns/b7b5e63a31ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-456990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6836073b2b5aeb1e24b66aebffccfdf9f8c813eeb874cb4432e2209cabcc4ee5",
	                    "EndpointID": "41613a53958a056219bc33821700d40d6a79b767fc738fb568b8c43a02e2e035",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:21:e5:9b:9c:6d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-456990",
	                        "9f5dab6a37e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990: exit status 2 (307.977841ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-456990 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ image   │ old-k8s-version-217464 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ pause   │ -p old-k8s-version-217464 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ newest-cni-456990 image list --format=json                                                                                                                                                                                                           │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p newest-cni-456990 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
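
	The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a rough illustration only, not minikube code, a short Go program could split the documents and read back the two kubelet settings that tie this profile to cri-o; gopkg.in/yaml.v3 and the file path are assumptions taken from the surrounding log:

// sketch_check_kubeadm_yaml.go: illustrative only, not part of minikube or this test run.
// Splits the multi-document kubeadm YAML and prints two KubeletConfiguration fields.
// Assumes gopkg.in/yaml.v3 is available and the path matches the scp target in the log.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if yaml.Unmarshal([]byte(doc), &m) != nil || m["kind"] != "KubeletConfiguration" {
			continue
		}
		// These two fields are what point the kubelet at the cri-o socket in this profile.
		fmt.Println("cgroupDriver:", m["cgroupDriver"])
		fmt.Println("containerRuntimeEndpoint:", m["containerRuntimeEndpoint"])
	}
}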
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:29.232822  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:29.232838  369577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:29.232934  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.271089  369577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.271109  369577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:29.271168  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.299544  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.374473  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:29.393341  369577 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:29.393411  369577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:29.397957  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:29.397976  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:29.401460  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.414861  369577 api_server.go:72] duration metric: took 215.119797ms to wait for apiserver process to appear ...
	I1201 20:09:29.414970  369577 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:29.415004  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:29.418380  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:29.418401  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:29.422686  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.442227  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:29.442256  369577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:29.462696  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:29.462720  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:29.488037  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:29.488054  369577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:29.503571  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:29.503606  369577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:29.520206  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:29.520228  369577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:29.535881  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:29.535904  369577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:29.552205  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:29.552229  369577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:29.569173  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:30.447688  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.447714  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.447729  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.491568  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.491608  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.915119  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.920667  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:30.920698  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.073336  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671812187s)
	I1201 20:09:31.073416  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.650688692s)
	I1201 20:09:31.073529  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.504317755s)
	I1201 20:09:31.074936  369577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-456990 addons enable metrics-server
	
	I1201 20:09:31.086132  369577 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:31.087441  369577 addons.go:530] duration metric: took 1.88767322s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:09:31.415255  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.419239  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:31.419264  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.915470  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.920415  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:31.921522  369577 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:31.921546  369577 api_server.go:131] duration metric: took 2.506562046s to wait for apiserver health ...
	I1201 20:09:31.921555  369577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:31.925533  369577 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:31.925565  369577 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925575  369577 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:31.925588  369577 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Running
	I1201 20:09:31.925605  369577 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:31.925615  369577 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:31.925621  369577 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Running
	I1201 20:09:31.925634  369577 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:31.925643  369577 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925653  369577 system_pods.go:74] duration metric: took 4.093389ms to wait for pod list to return data ...
	I1201 20:09:31.925664  369577 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:31.928075  369577 default_sa.go:45] found service account: "default"
	I1201 20:09:31.928096  369577 default_sa.go:55] duration metric: took 2.423245ms for default service account to be created ...
	I1201 20:09:31.928110  369577 kubeadm.go:587] duration metric: took 2.728376297s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:31.928130  369577 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:31.930417  369577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:31.930440  369577 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:31.930454  369577 node_conditions.go:105] duration metric: took 2.318192ms to run NodePressure ...
	I1201 20:09:31.930467  369577 start.go:242] waiting for startup goroutines ...
	I1201 20:09:31.930480  369577 start.go:247] waiting for cluster config update ...
	I1201 20:09:31.930496  369577 start.go:256] writing updated cluster config ...
	I1201 20:09:31.930881  369577 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:31.982349  369577 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:31.984030  369577 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
	W1201 20:09:29.388771  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:31.888825  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
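
	For context on the healthz probes in the log above: api_server.go keeps re-checking https://192.168.76.2:8443/healthz, treating 403 (the unauthenticated probe is rejected for system:anonymous) and 500 (poststarthooks such as rbac/bootstrap-roles still running) as "not ready yet", and stops once the endpoint returns 200 with body "ok". A stand-alone sketch of that polling pattern, illustrative only and not minikube's implementation (the retry interval and TLS handling are assumptions), could look like:

// sketch_healthz_poll.go: illustrative only; mirrors the retry behaviour seen in the
// api_server.go lines above. URL, interval and InsecureSkipVerify are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The probe in the log is unauthenticated; this sketch also skips cert verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 (anonymous) and 500 (poststarthooks still running) both mean "try again".
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}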
	
	
	==> CRI-O <==
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.852310234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.854603688Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6928cf06-a034-4161-855f-bd7d33a5de67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.857919534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.858533398Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=30d6d20b-08fc-4a96-826c-0a6ae891ad49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.859441408Z" level=info msg="Ran pod sandbox 2bfd83813e10e815ce0f893af0b08dea3945e3eb77a43b4f365207b592b21042 with infra container: kube-system/kindnet-gbbwm/POD" id=6928cf06-a034-4161-855f-bd7d33a5de67 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.860462329Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.861271781Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=73e30363-f3da-4589-8f36-d18e48f304ad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.861702757Z" level=info msg="Ran pod sandbox b4759f29545ab4628240182f1350d49ca5a4a71b7b4459e93c53a898425e7886 with infra container: kube-system/kube-proxy-gmbzw/POD" id=30d6d20b-08fc-4a96-826c-0a6ae891ad49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.862746336Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d76e62b8-5c55-46eb-9ae7-dec9b93dfc6e name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.862770842Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3c49356a-63b7-410a-8640-bb9967c47060 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.863680585Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8276df3c-c3bd-4cd2-8c1c-69252be82c5f name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.863945576Z" level=info msg="Creating container: kube-system/kindnet-gbbwm/kindnet-cni" id=48e33e4e-f06d-47f4-b89e-9c56ce59f592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864056799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864738552Z" level=info msg="Creating container: kube-system/kube-proxy-gmbzw/kube-proxy" id=4db22e27-333f-4543-8a84-2ae3cc9baf0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.864864795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.870625962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.871232587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.873526603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.874104466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.916068088Z" level=info msg="Created container 5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb: kube-system/kindnet-gbbwm/kindnet-cni" id=48e33e4e-f06d-47f4-b89e-9c56ce59f592 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.916921798Z" level=info msg="Starting container: 5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb" id=7f9953b0-42ee-4e69-bc46-96ba9996133c name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.918056644Z" level=info msg="Created container f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237: kube-system/kube-proxy-gmbzw/kube-proxy" id=4db22e27-333f-4543-8a84-2ae3cc9baf0d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.918647725Z" level=info msg="Starting container: f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237" id=3bbdf843-25cd-4b06-9b24-6e04a6b57cca name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.919007842Z" level=info msg="Started container" PID=1039 containerID=5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb description=kube-system/kindnet-gbbwm/kindnet-cni id=7f9953b0-42ee-4e69-bc46-96ba9996133c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bfd83813e10e815ce0f893af0b08dea3945e3eb77a43b4f365207b592b21042
	Dec 01 20:09:30 newest-cni-456990 crio[523]: time="2025-12-01T20:09:30.922592015Z" level=info msg="Started container" PID=1040 containerID=f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237 description=kube-system/kube-proxy-gmbzw/kube-proxy id=3bbdf843-25cd-4b06-9b24-6e04a6b57cca name=/runtime.v1.RuntimeService/StartContainer sandboxID=b4759f29545ab4628240182f1350d49ca5a4a71b7b4459e93c53a898425e7886
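
	The CRI-O excerpt above is the normal restart sequence per pod: RunPodSandbox, an ImageStatus check, CreateContainer, then StartContainer, once for kindnet-gbbwm and once for kube-proxy-gmbzw. The same information can be read back over the cri-o socket that crictl uses elsewhere in this log; the following is a minimal sketch for illustration, not part of the test harness, and assumes k8s.io/cri-api and google.golang.org/grpc are available:

// sketch_cri_list.go: illustrative only. Lists containers over the cri-o socket,
// roughly what `crictl ps -a` does in the ssh_runner lines earlier in this log.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The socket path matches the containerRuntimeEndpoint configured earlier in the log.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print the truncated ID, name and state, similar to the container status table below.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}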
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f7ebe7c114089       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   b4759f29545ab       kube-proxy-gmbzw                            kube-system
	5ad9374ebdef5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   2bfd83813e10e       kindnet-gbbwm                               kube-system
	1417580c3497c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   d075b3743e5b1       kube-apiserver-newest-cni-456990            kube-system
	daab845ade168       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   8b7716a8e93d0       etcd-newest-cni-456990                      kube-system
	b6856377ff536       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   120d86efa0197       kube-controller-manager-newest-cni-456990   kube-system
	392fe0a49d21c       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   d7fcc6be4fa14       kube-scheduler-newest-cni-456990            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-456990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-456990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=newest-cni-456990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_09_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:09:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-456990
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 01 Dec 2025 20:09:30 +0000   Mon, 01 Dec 2025 20:09:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-456990
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                725bbd5a-64fb-4dec-99aa-76f4e9244e2a
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-456990                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-gbbwm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-456990             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-456990    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-gmbzw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-456990             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-456990 event: Registered Node newest-cni-456990 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-456990 event: Registered Node newest-cni-456990 in Controller
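
	Read together with the system_pods list earlier, the taints and the Ready=False condition explain the two Pending pods: the node still carries node.kubernetes.io/not-ready taints because no CNI configuration file exists yet in /etc/cni/net.d, so the scheduler reports "1 node(s) had untolerated taint(s)" for coredns and storage-provisioner; once the freshly started kindnet container writes its CNI config, the taints should clear. Checking those taints programmatically is a few lines of client-go; the sketch below is for illustration only and is not part of the test harness (the kubeconfig path is an assumption):

// sketch_node_taints.go: illustrative only; lists node taints like the ones shown
// in the node description above. Assumes a kubeconfig at $HOME/.kube/config.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			// e.g. newest-cni-456990  node.kubernetes.io/not-ready=:NoSchedule
			fmt.Printf("%s\t%s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
		}
	}
}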
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b] <==
	{"level":"warn","ts":"2025-12-01T20:09:29.804689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.811217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.823981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.831465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.838147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.844980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.851581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.865876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.873615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.880964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.889214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.897931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.904450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.911327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.918810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.927922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.935492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.943110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.950530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.957427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.972008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.981299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.990538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:29.999932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:30.053562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:37 up  1:52,  0 user,  load average: 3.49, 3.33, 2.39
	Linux newest-cni-456990 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ad9374ebdef598af3c0c43e3153f65b169b819e5fb4a9d886372581420d7cdb] <==
	I1201 20:09:31.053704       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:09:31.053964       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1201 20:09:31.054096       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:09:31.054111       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:09:31.054136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:09:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:09:31.346799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:09:31.346846       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:09:31.346861       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:09:31.347026       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:09:31.548187       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:09:31.548210       1 metrics.go:72] Registering metrics
	I1201 20:09:31.548274       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7] <==
	I1201 20:09:30.541210       1 aggregator.go:187] initial CRD sync complete...
	I1201 20:09:30.541225       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:09:30.541231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:09:30.541237       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:09:30.541342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:09:30.542248       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:09:30.545958       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:30.548507       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:09:30.578166       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1201 20:09:30.589810       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:30.589903       1 policy_source.go:248] refreshing policies
	I1201 20:09:30.596172       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:09:30.678734       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:09:30.854375       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:09:30.888473       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:09:30.915933       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:09:30.923450       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:09:30.968918       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.9.233"}
	I1201 20:09:30.980426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.174.85"}
	I1201 20:09:31.444314       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1201 20:09:34.111405       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:09:34.259933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:34.310229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:09:34.360898       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be] <==
	I1201 20:09:33.676465       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680346       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680389       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680402       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680409       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680416       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1201 20:09:33.680423       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1201 20:09:33.680408       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680470       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680633       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680694       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680805       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1201 20:09:33.680859       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680917       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680937       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-456990"
	I1201 20:09:33.680846       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680968       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.681312       1 range_allocator.go:177] "Sending events to api server"
	I1201 20:09:33.681019       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1201 20:09:33.681418       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1201 20:09:33.681469       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:33.681496       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.680958       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.682034       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:33.770401       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f7ebe7c114089b7fd98a77b02336b7b4a7229843afaee59054de20ddd4c39237] <==
	I1201 20:09:30.967835       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:09:31.027569       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:31.128033       1 shared_informer.go:377] "Caches are synced"
	I1201 20:09:31.128089       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1201 20:09:31.128180       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:09:31.146905       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:09:31.146969       1 server_linux.go:136] "Using iptables Proxier"
	I1201 20:09:31.152749       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:09:31.153152       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1201 20:09:31.153191       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:31.154636       1 config.go:309] "Starting node config controller"
	I1201 20:09:31.154657       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:09:31.154701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:09:31.154708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:09:31.154733       1 config.go:200] "Starting service config controller"
	I1201 20:09:31.154740       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:09:31.154759       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:09:31.154764       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:09:31.255411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:09:31.255427       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:09:31.255426       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:09:31.255455       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b] <==
	I1201 20:09:29.463506       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:09:30.456015       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:09:30.456061       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:09:30.456074       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:09:30.456098       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:09:30.504765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1201 20:09:30.504801       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:30.507852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:30.507963       1 shared_informer.go:370] "Waiting for caches to sync"
	I1201 20:09:30.508068       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:09:30.508191       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:09:30.608219       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.658535     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-456990\" already exists" pod="kube-system/etcd-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.658577     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.666469     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-456990\" already exists" pod="kube-system/kube-apiserver-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.666511     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669448     662 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669541     662 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.669574     662 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.670709     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.672609     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-456990\" already exists" pod="kube-system/kube-controller-manager-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.672637     662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-456990"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.675874     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-xtables-lock\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.675918     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-lib-modules\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676145     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-cni-cfg\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676180     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7386a806-e262-4de4-827f-fcc08a786840-xtables-lock\") pod \"kindnet-gbbwm\" (UID: \"7386a806-e262-4de4-827f-fcc08a786840\") " pod="kube-system/kindnet-gbbwm"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: I1201 20:09:30.676206     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b60069ca-4117-475a-9a2f-5ecd18fca600-lib-modules\") pod \"kube-proxy-gmbzw\" (UID: \"b60069ca-4117-475a-9a2f-5ecd18fca600\") " pod="kube-system/kube-proxy-gmbzw"
	Dec 01 20:09:30 newest-cni-456990 kubelet[662]: E1201 20:09:30.681353     662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-456990\" already exists" pod="kube-system/kube-scheduler-newest-cni-456990"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589600     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-456990" containerName="kube-scheduler"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589715     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.589920     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-456990" containerName="kube-apiserver"
	Dec 01 20:09:31 newest-cni-456990 kubelet[662]: E1201 20:09:31.590095     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-456990" containerName="kube-controller-manager"
	Dec 01 20:09:32 newest-cni-456990 kubelet[662]: E1201 20:09:32.591420     662 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-456990" containerName="etcd"
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:33 newest-cni-456990 kubelet[662]: I1201 20:09:33.030063     662 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:33 newest-cni-456990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-456990 -n newest-cni-456990
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-456990 -n newest-cni-456990: exit status 2 (323.631876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-456990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz: exit status 1 (58.967716ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-6t6ld" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ql7j9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-22lvz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-456990 describe pod coredns-7d764666f9-6t6ld storage-provisioner dashboard-metrics-scraper-867fb5f87b-ql7j9 kubernetes-dashboard-b84665fb8-22lvz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.04s)
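The post-mortem above first lists every pod whose phase is not Running (helpers_test.go:269) and then describes those pods by name, which is the step that produces the NotFound errors. Below is a minimal Go sketch of that two-step check; it assumes only that kubectl is on PATH and that the newest-cni-456990 context is reachable, mirrors the kubectl invocations shown in the log, and is not the test framework's own helper.

	// nonrunning.go: sketch of the post-mortem's "list non-running pods, then
	// describe them" sequence. The context name is the profile from this run;
	// everything else is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-456990"
		// Step 1: names of pods whose phase is not Running (mirrors helpers_test.go:269).
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("get po failed:", err)
			return
		}
		pods := strings.Fields(string(out))
		if len(pods) == 0 {
			return
		}
		// Step 2: describe the pods by name, as the post-mortem does; in the run
		// above every name came back NotFound, so the step exited with status 1.
		args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
		desc, _ := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(desc))
	}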

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-009682 --alsologtostderr -v=1
E1201 20:09:59.428081   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.434461   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.445811   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.467203   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.508531   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.589995   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:59.751723   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-009682 --alsologtostderr -v=1: exit status 80 (2.44271125s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-009682 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:09:57.531887  376564 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:57.532149  376564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:57.532159  376564 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:57.532163  376564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:57.532378  376564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:57.532606  376564 out.go:368] Setting JSON to false
	I1201 20:09:57.532622  376564 mustload.go:66] Loading cluster: default-k8s-diff-port-009682
	I1201 20:09:57.532971  376564 config.go:182] Loaded profile config "default-k8s-diff-port-009682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:09:57.533348  376564 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-009682 --format={{.State.Status}}
	I1201 20:09:57.551339  376564 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:57.551589  376564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:57.610443  376564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-01 20:09:57.600982445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:57.611012  376564 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764600683-21997/minikube-v1.37.0-1764600683-21997-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764600683-21997-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-009682 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1201 20:09:57.613901  376564 out.go:179] * Pausing node default-k8s-diff-port-009682 ... 
	I1201 20:09:57.615046  376564 host.go:66] Checking if "default-k8s-diff-port-009682" exists ...
	I1201 20:09:57.615274  376564 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:57.615362  376564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-009682
	I1201 20:09:57.632702  376564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/default-k8s-diff-port-009682/id_rsa Username:docker}
	I1201 20:09:57.729859  376564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:57.741466  376564 pause.go:52] kubelet running: true
	I1201 20:09:57.741519  376564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:57.903902  376564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:57.904002  376564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:57.970856  376564 cri.go:89] found id: "ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791"
	I1201 20:09:57.970876  376564 cri.go:89] found id: "2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e"
	I1201 20:09:57.970880  376564 cri.go:89] found id: "12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a"
	I1201 20:09:57.970884  376564 cri.go:89] found id: "6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	I1201 20:09:57.970887  376564 cri.go:89] found id: "3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728"
	I1201 20:09:57.970891  376564 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:57.970893  376564 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:57.970896  376564 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:57.970899  376564 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:57.970904  376564 cri.go:89] found id: "feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	I1201 20:09:57.970907  376564 cri.go:89] found id: "c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266"
	I1201 20:09:57.970909  376564 cri.go:89] found id: ""
	I1201 20:09:57.970944  376564 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:57.982372  376564 retry.go:31] will retry after 167.354499ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:57Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:58.150864  376564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:58.179775  376564 pause.go:52] kubelet running: false
	I1201 20:09:58.179859  376564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:58.315660  376564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:58.315762  376564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:58.379709  376564 cri.go:89] found id: "ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791"
	I1201 20:09:58.379728  376564 cri.go:89] found id: "2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e"
	I1201 20:09:58.379732  376564 cri.go:89] found id: "12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a"
	I1201 20:09:58.379736  376564 cri.go:89] found id: "6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	I1201 20:09:58.379739  376564 cri.go:89] found id: "3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728"
	I1201 20:09:58.379743  376564 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:58.379746  376564 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:58.379748  376564 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:58.379751  376564 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:58.379762  376564 cri.go:89] found id: "feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	I1201 20:09:58.379765  376564 cri.go:89] found id: "c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266"
	I1201 20:09:58.379768  376564 cri.go:89] found id: ""
	I1201 20:09:58.379802  376564 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:58.390838  376564 retry.go:31] will retry after 307.170544ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:58Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:58.698192  376564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:58.710613  376564 pause.go:52] kubelet running: false
	I1201 20:09:58.710677  376564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:58.843163  376564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:58.843238  376564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:58.906919  376564 cri.go:89] found id: "ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791"
	I1201 20:09:58.906942  376564 cri.go:89] found id: "2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e"
	I1201 20:09:58.906948  376564 cri.go:89] found id: "12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a"
	I1201 20:09:58.906952  376564 cri.go:89] found id: "6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	I1201 20:09:58.906955  376564 cri.go:89] found id: "3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728"
	I1201 20:09:58.906958  376564 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:58.906961  376564 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:58.906963  376564 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:58.906966  376564 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:58.906985  376564 cri.go:89] found id: "feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	I1201 20:09:58.906991  376564 cri.go:89] found id: "c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266"
	I1201 20:09:58.906994  376564 cri.go:89] found id: ""
	I1201 20:09:58.907028  376564 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:58.918188  376564 retry.go:31] will retry after 762.199101ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:58Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:59.681107  376564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:09:59.693401  376564 pause.go:52] kubelet running: false
	I1201 20:09:59.693469  376564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1201 20:09:59.829983  376564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1201 20:09:59.830075  376564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1201 20:09:59.895675  376564 cri.go:89] found id: "ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791"
	I1201 20:09:59.895698  376564 cri.go:89] found id: "2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e"
	I1201 20:09:59.895705  376564 cri.go:89] found id: "12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a"
	I1201 20:09:59.895709  376564 cri.go:89] found id: "6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	I1201 20:09:59.895712  376564 cri.go:89] found id: "3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728"
	I1201 20:09:59.895716  376564 cri.go:89] found id: "ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e"
	I1201 20:09:59.895720  376564 cri.go:89] found id: "b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4"
	I1201 20:09:59.895723  376564 cri.go:89] found id: "a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d"
	I1201 20:09:59.895726  376564 cri.go:89] found id: "c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a"
	I1201 20:09:59.895735  376564 cri.go:89] found id: "feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	I1201 20:09:59.895779  376564 cri.go:89] found id: "c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266"
	I1201 20:09:59.895793  376564 cri.go:89] found id: ""
	I1201 20:09:59.895845  376564 ssh_runner.go:195] Run: sudo runc list -f json
	I1201 20:09:59.909610  376564 out.go:203] 
	W1201 20:09:59.911273  376564 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1201 20:09:59.911312  376564 out.go:285] * 
	* 
	W1201 20:09:59.915378  376564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 20:09:59.916625  376564 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-009682 --alsologtostderr -v=1 failed: exit status 80
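The stderr above shows why pause ends in GUEST_PAUSE: after disabling the kubelet, pause lists CRI containers for the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then asks runc for the running containers, and each "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory"; the run retries the sequence three times and then gives up with exit status 80. Below is a minimal Go sketch of that probe-and-retry sequence, mirroring only the commands visible in the log; the backoff durations are illustrative and this is not minikube's pause implementation.

	// pauseprobe.go: sketch of the probe/retry shape seen in the stderr above
	// (systemctl check, crictl listing by namespace label, runc list with backoff).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// 1. If the kubelet is still active, disable it first (pause.go:52 in the log).
		if _, err := run("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet"); err == nil {
			run("sudo", "systemctl", "disable", "--now", "kubelet")
		}

		// 2. Collect container IDs for one of the namespaces pause targets.
		run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")

		// 3. Ask runc for the running containers; in this run the call keeps failing
		//    with "open /run/runc: no such file or directory", so after the retries
		//    pause exits with GUEST_PAUSE (exit status 80).
		for _, backoff := range []time.Duration{0, 200 * time.Millisecond, 400 * time.Millisecond} {
			time.Sleep(backoff)
			if out, err := run("sudo", "runc", "list", "-f", "json"); err == nil {
				fmt.Print(out)
				return
			} else {
				fmt.Printf("runc list failed (%v): %s", err, out)
			}
		}
	}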
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-009682
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-009682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	        "Created": "2025-12-01T20:07:49.041220039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 363710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:57.778214451Z",
	            "FinishedAt": "2025-12-01T20:08:55.541776699Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hosts",
	        "LogPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb-json.log",
	        "Name": "/default-k8s-diff-port-009682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-009682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-009682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	                "LowerDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-009682",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-009682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-009682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f4483c3efdd8543cd46a380f45ee8fa65d4ce890782763e1cf0beed7fa6c958c",
	            "SandboxKey": "/var/run/docker/netns/f4483c3efdd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-009682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae21c1908b572396f83bd86ca68adf4c8b9646d28fbd4ac53d2a1a3af1c0eae4",
	                    "EndpointID": "3507bbdfdf98ace5dfe7d92a7f83990b94e8125e539a564786ede13fce636652",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:c0:c7:86:1c:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-009682",
	                        "0b0f250c2430"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
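(Aside, illustrative only — not part of the captured test output: the fields in the inspect dump above can be read back directly with docker's --format templates, the same pattern the harness itself uses in the log below; the container name and the 8444/tcp -> 33136 mapping are taken from the output above.)

	# state of the profile container
	docker container inspect -f '{{.State.Status}}' default-k8s-diff-port-009682
	# host port mapped to apiserver port 8444/tcp (33136 per the port map above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-009682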
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
E1201 20:10:00.074009   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682: exit status 2 (315.901368ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25
E1201 20:10:00.715426   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25: (1.05905212s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ newest-cni-456990 image list --format=json                                                                                                                                                                                                           │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p newest-cni-456990 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p newest-cni-456990                                                                                                                                                                                                                                 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p newest-cni-456990                                                                                                                                                                                                                                 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ default-k8s-diff-port-009682 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p default-k8s-diff-port-009682 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
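
The rendered kubeadm config above shows how the single kubeadm.pod-network-cidr=10.42.0.0/16 extra option fans out: it becomes networking.podSubnet in the ClusterConfiguration and clusterCIDR in the KubeProxyConfiguration, while the service subnet stays at the 10.96.0.0/12 default. As a rough illustration only (not minikube source), a stdlib Go check that the two ranges are valid and non-overlapping looks like this:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share addresses; for aligned CIDR
// blocks this is true exactly when one network contains the other's base IP.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	// Values taken from the config rendered above.
	_, podCIDR, err := net.ParseCIDR("10.42.0.0/16")
	if err != nil {
		panic(err)
	}
	_, svcCIDR, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Printf("podSubnet=%s serviceSubnet=%s overlap=%v\n", podCIDR, svcCIDR, overlaps(podCIDR, svcCIDR))
}

Run against the values above it prints overlap=false, which is why the pod and service ranges can coexist on the same node.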
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
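
Each control-plane certificate above is vetted with openssl x509 -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". A stdlib-only Go sketch of the same check (the certificate path is a placeholder, not taken from the test):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log checks each cert under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `openssl x509 -checkend 86400`: is the certificate
	// still valid 24 hours from now?
	stillValid := cert.NotAfter.After(time.Now().Add(24 * time.Hour))
	fmt.Println("valid for at least another 24h:", stillValid)
}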
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:29.232822  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:29.232838  369577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:29.232934  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.271089  369577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.271109  369577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:29.271168  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.299544  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.374473  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:29.393341  369577 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:29.393411  369577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:29.397957  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:29.397976  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:29.401460  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.414861  369577 api_server.go:72] duration metric: took 215.119797ms to wait for apiserver process to appear ...
	I1201 20:09:29.414970  369577 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:29.415004  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:29.418380  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:29.418401  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:29.422686  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.442227  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:29.442256  369577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:29.462696  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:29.462720  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:29.488037  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:29.488054  369577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:29.503571  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:29.503606  369577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:29.520206  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:29.520228  369577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:29.535881  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:29.535904  369577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:29.552205  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:29.552229  369577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:29.569173  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:30.447688  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.447714  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.447729  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.491568  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.491608  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.915119  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.920667  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:30.920698  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.073336  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671812187s)
	I1201 20:09:31.073416  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.650688692s)
	I1201 20:09:31.073529  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.504317755s)
	I1201 20:09:31.074936  369577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-456990 addons enable metrics-server
	
	I1201 20:09:31.086132  369577 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:31.087441  369577 addons.go:530] duration metric: took 1.88767322s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1201 20:09:31.415255  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.419239  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:31.419264  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.915470  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.920415  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:31.921522  369577 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:31.921546  369577 api_server.go:131] duration metric: took 2.506562046s to wait for apiserver health ...
	I1201 20:09:31.921555  369577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:31.925533  369577 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:31.925565  369577 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925575  369577 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:31.925588  369577 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Running
	I1201 20:09:31.925605  369577 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:31.925615  369577 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:31.925621  369577 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Running
	I1201 20:09:31.925634  369577 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:31.925643  369577 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925653  369577 system_pods.go:74] duration metric: took 4.093389ms to wait for pod list to return data ...
	I1201 20:09:31.925664  369577 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:31.928075  369577 default_sa.go:45] found service account: "default"
	I1201 20:09:31.928096  369577 default_sa.go:55] duration metric: took 2.423245ms for default service account to be created ...
	I1201 20:09:31.928110  369577 kubeadm.go:587] duration metric: took 2.728376297s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:31.928130  369577 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:31.930417  369577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:31.930440  369577 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:31.930454  369577 node_conditions.go:105] duration metric: took 2.318192ms to run NodePressure ...
	I1201 20:09:31.930467  369577 start.go:242] waiting for startup goroutines ...
	I1201 20:09:31.930480  369577 start.go:247] waiting for cluster config update ...
	I1201 20:09:31.930496  369577 start.go:256] writing updated cluster config ...
	I1201 20:09:31.930881  369577 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:31.982349  369577 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:31.984030  369577 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
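
The wait for the restarted apiserver above always follows the same shape: /healthz returns 403 while only the anonymous user can be authenticated, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and finally 200. A self-contained polling sketch of that loop (illustrative only; the endpoint and deadline are placeholders, and TLS verification is simply skipped instead of trusting the cluster CA as minikube itself does):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint; the log polls https://192.168.76.2:8443/healthz.
	const healthz = "https://192.168.76.2:8443/healthz"

	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA; verification is
			// skipped here purely to keep the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			fmt.Println("healthz status:", resp.StatusCode) // 403 -> 500 -> 200 in the log above
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}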
	W1201 20:09:29.388771  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:31.888825  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:34.387216  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:36.387620  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:38.887396  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:40.887546  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:42.887792  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:44.387970  363421 pod_ready.go:94] pod "coredns-66bc5c9577-hf646" is "Ready"
	I1201 20:09:44.387998  363421 pod_ready.go:86] duration metric: took 36.005947971s for pod "coredns-66bc5c9577-hf646" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.390521  363421 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.394349  363421 pod_ready.go:94] pod "etcd-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.394376  363421 pod_ready.go:86] duration metric: took 3.831228ms for pod "etcd-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.396320  363421 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.402040  363421 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.402063  363421 pod_ready.go:86] duration metric: took 5.717196ms for pod "kube-apiserver-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.403774  363421 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.586644  363421 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.586669  363421 pod_ready.go:86] duration metric: took 182.875387ms for pod "kube-controller-manager-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.786881  363421 pod_ready.go:83] waiting for pod "kube-proxy-fjn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.186372  363421 pod_ready.go:94] pod "kube-proxy-fjn7h" is "Ready"
	I1201 20:09:45.186399  363421 pod_ready.go:86] duration metric: took 399.491533ms for pod "kube-proxy-fjn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.386608  363421 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.786541  363421 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:45.786569  363421 pod_ready.go:86] duration metric: took 399.93667ms for pod "kube-scheduler-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.786585  363421 pod_ready.go:40] duration metric: took 37.407704581s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:45.828303  363421 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:09:45.830061  363421 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-009682" cluster and "default" namespace by default
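
The pod_ready.go lines above are the readiness gate for this profile: every kube-system pod carrying one of the labels k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy or component=kube-scheduler must report the Ready condition before start returns (36.0s of the 37.4s total wait was coredns alone). A rough client-go equivalent for a single label selector (illustrative; assumes the k8s.io/client-go module is available, and the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// One selector shown; the test loops over each component label in turn.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all matching pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}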
	
	
	==> CRI-O <==
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.132611776Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.135925561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.135952817Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.275189721Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fd3fdb9-4f4d-406b-8b9b-3fb7062f8e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.276268569Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26434a0b-c41a-4a8c-bbaa-e0f5bfb8a28a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.277435957Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=8c5faeab-5e6b-456e-93c5-bca616647297 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.277579775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.283655481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.284210784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.311692635Z" level=info msg="Created container feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=8c5faeab-5e6b-456e-93c5-bca616647297 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.312324797Z" level=info msg="Starting container: feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad" id=7a4034ba-108a-44af-a16c-488b1bbc23b0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.314563071Z" level=info msg="Started container" PID=1765 containerID=feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper id=7a4034ba-108a-44af-a16c-488b1bbc23b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37efaefe205f98fe873b541457424cb2628d086adfb38212fd7ba6aa4d161e07
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.392275597Z" level=info msg="Removing container: c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74" id=da91aeb5-62f8-4e00-9dd5-a3ae0ac86219 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.40634904Z" level=info msg="Removed container c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=da91aeb5-62f8-4e00-9dd5-a3ae0ac86219 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.40227559Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19e1c516-5704-4b64-ab3a-63b132657058 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.403144403Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0a34d482-6836-463b-8af2-da2814967fad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.404195655Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a0885c5f-fda4-4496-bc61-4ea13b43583f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.404360055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408673639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408807684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/12e65bfc4b0e58e884e9a4b4bf044268a9d8bdaa44cf746673865592d9405b62/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408829168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/12e65bfc4b0e58e884e9a4b4bf044268a9d8bdaa44cf746673865592d9405b62/merged/etc/group: no such file or directory"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.409949796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.436750757Z" level=info msg="Created container ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791: kube-system/storage-provisioner/storage-provisioner" id=a0885c5f-fda4-4496-bc61-4ea13b43583f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.437379293Z" level=info msg="Starting container: ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791" id=f4a4739f-3ba4-43be-a05f-afbd7142cda7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.439516486Z" level=info msg="Started container" PID=1779 containerID=ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791 description=kube-system/storage-provisioner/storage-provisioner id=f4a4739f-3ba4-43be-a05f-afbd7142cda7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bc9bd1252e530e477cb7d89c311ff48537ea04a23d6a7e4031a30b7c51aa80b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ea7770d9081d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   1bc9bd1252e53       storage-provisioner                                    kube-system
	feda1b78f267e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   37efaefe205f9       dashboard-metrics-scraper-6ffb444bf9-g9xdl             kubernetes-dashboard
	c9b4c204afd5c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   5b4044f34aeed       kubernetes-dashboard-855c9754f9-s6hvn                  kubernetes-dashboard
	2ae4eee24c0b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   f12f354f99343       coredns-66bc5c9577-hf646                               kube-system
	c3d03f6faa71d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   9c4b5f2125399       busybox                                                default
	12ce190d184fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   9389a79841665       kindnet-pqt6x                                          kube-system
	6b13013ae0b35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   1bc9bd1252e53       storage-provisioner                                    kube-system
	3ddc74de106d8       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   98e7bcc384682       kube-proxy-fjn7h                                       kube-system
	ef4ba8d77dd0e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   d81787591073e       etcd-default-k8s-diff-port-009682                      kube-system
	b15229721c1e0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   5cc98afc5c828       kube-controller-manager-default-k8s-diff-port-009682   kube-system
	a1e60ba950826       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   4be37620627d3       kube-scheduler-default-k8s-diff-port-009682            kube-system
	c037673fa52f7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   213be4f0a0151       kube-apiserver-default-k8s-diff-port-009682            kube-system
	
	
	==> coredns [2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37527 - 9075 "HINFO IN 2514835802269368415.1494518536876060678. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019698164s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-009682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-009682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=default-k8s-diff-port-009682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_08_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-009682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-009682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                05b6424d-f307-4593-b87d-4cd8ab421755
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-hf646                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-009682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-pqt6x                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-009682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-009682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-fjn7h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-009682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g9xdl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s6hvn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-009682 event: Registered Node default-k8s-diff-port-009682 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-009682 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-009682 event: Registered Node default-k8s-diff-port-009682 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e] <==
	{"level":"warn","ts":"2025-12-01T20:09:06.116736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.123940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.130194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.146576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.152789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.159462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.166472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.174230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.180501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.191007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.199449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.207072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.213852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.220764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.227225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.243665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.251547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.258393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.265100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.273365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.280547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.293761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.301118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.308030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.355283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:10:01 up  1:52,  0 user,  load average: 2.90, 3.21, 2.37
	Linux default-k8s-diff-port-009682 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a] <==
	I1201 20:09:07.817403       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:09:07.817678       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1201 20:09:07.817905       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:09:07.817930       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:09:07.817953       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:09:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:09:08.115999       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:09:08.116091       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:09:08.116111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:09:08.116536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:09:08.516234       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:09:08.516268       1 metrics.go:72] Registering metrics
	I1201 20:09:08.516356       1 controller.go:711] "Syncing nftables rules"
	I1201 20:09:18.116785       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:18.116839       1 main.go:301] handling current node
	I1201 20:09:28.120377       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:28.120411       1 main.go:301] handling current node
	I1201 20:09:38.116703       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:38.116752       1 main.go:301] handling current node
	I1201 20:09:48.116981       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:48.117015       1 main.go:301] handling current node
	I1201 20:09:58.116502       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:58.116566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a] <==
	I1201 20:09:06.845356       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1201 20:09:06.845394       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 20:09:06.845479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:09:06.845517       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:09:06.845526       1 aggregator.go:171] initial CRD sync complete...
	I1201 20:09:06.845559       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:09:06.845641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:09:06.845668       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:09:06.845870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1201 20:09:06.851524       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:09:06.853358       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:09:06.862827       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:09:06.879438       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:09:07.115992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:09:07.145634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:09:07.162727       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:09:07.169839       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:09:07.176546       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:09:07.209895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.19.230"}
	I1201 20:09:07.219215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.102.78"}
	I1201 20:09:07.747641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:09:10.213847       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:10.665429       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:09:10.715658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:09:10.715658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4] <==
	I1201 20:09:10.128789       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:09:10.130919       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 20:09:10.133219       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:09:10.153563       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1201 20:09:10.154815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:09:10.158847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:09:10.160046       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:09:10.160181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 20:09:10.160351       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:09:10.160418       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:09:10.160427       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:09:10.160439       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:09:10.160507       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:09:10.160526       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 20:09:10.160540       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:09:10.160548       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:09:10.160555       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:09:10.165266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:09:10.172517       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:09:10.172570       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:09:10.172629       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:09:10.172637       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:09:10.172644       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:09:10.176771       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:09:10.186092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728] <==
	I1201 20:09:07.675188       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:09:07.769628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:09:07.869896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:09:07.869948       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1201 20:09:07.870049       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:09:07.891898       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:09:07.891960       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:09:07.898263       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:09:07.898650       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:09:07.898675       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:07.902444       1 config.go:200] "Starting service config controller"
	I1201 20:09:07.902522       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:09:07.902658       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:09:07.902990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:09:07.902947       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:09:07.903453       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:09:07.903133       1 config.go:309] "Starting node config controller"
	I1201 20:09:07.903478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:09:07.903484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:09:08.002732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:09:08.003570       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:09:08.003589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d] <==
	I1201 20:09:05.620226       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:09:06.767620       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:09:06.767726       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:09:06.767742       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:09:06.767751       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:09:06.804397       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:09:06.804501       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:06.808107       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:09:06.808233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:09:06.810035       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:06.810093       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:06.911418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:09:10 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:10.899647     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf49b\" (UniqueName: \"kubernetes.io/projected/c8483a44-6cc7-4129-95e3-734c4b95302a-kube-api-access-kf49b\") pod \"dashboard-metrics-scraper-6ffb444bf9-g9xdl\" (UID: \"c8483a44-6cc7-4129-95e3-734c4b95302a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl"
	Dec 01 20:09:10 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:10.899683     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfn6\" (UniqueName: \"kubernetes.io/projected/cc861f8f-612e-438c-af44-6b614122609d-kube-api-access-hwfn6\") pod \"kubernetes-dashboard-855c9754f9-s6hvn\" (UID: \"cc861f8f-612e-438c-af44-6b614122609d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s6hvn"
	Dec 01 20:09:14 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:14.118270     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:09:14 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:14.331188     732 scope.go:117] "RemoveContainer" containerID="1419dd4c2a0a3234a50c87f48f3aacb4e29fe775a48a09f97fa69d747d19ac7c"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:15.335535     732 scope.go:117] "RemoveContainer" containerID="1419dd4c2a0a3234a50c87f48f3aacb4e29fe775a48a09f97fa69d747d19ac7c"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:15.335682     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:15.335881     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:16 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:16.339875     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:16 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:16.340082     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:18 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:18.356604     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s6hvn" podStartSLOduration=1.691257458 podStartE2EDuration="8.356577831s" podCreationTimestamp="2025-12-01 20:09:10 +0000 UTC" firstStartedPulling="2025-12-01 20:09:11.124706848 +0000 UTC m=+6.965257035" lastFinishedPulling="2025-12-01 20:09:17.790027226 +0000 UTC m=+13.630577408" observedRunningTime="2025-12-01 20:09:18.356175361 +0000 UTC m=+14.196725565" watchObservedRunningTime="2025-12-01 20:09:18.356577831 +0000 UTC m=+14.197128033"
	Dec 01 20:09:22 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:22.137563     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:22 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:22.137740     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.274658     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.390961     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.391187     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:35.391426     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:38 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:38.401916     732 scope.go:117] "RemoveContainer" containerID="6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	Dec 01 20:09:42 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:42.136700     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:42 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:42.137011     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:55 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:55.274873     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:55 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:55.275047     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: kubelet.service: Consumed 1.684s CPU time.
	
	
	==> kubernetes-dashboard [c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266] <==
	2025/12/01 20:09:17 Starting overwatch
	2025/12/01 20:09:17 Using namespace: kubernetes-dashboard
	2025/12/01 20:09:17 Using in-cluster config to connect to apiserver
	2025/12/01 20:09:17 Using secret token for csrf signing
	2025/12/01 20:09:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:09:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:09:17 Successful initial request to the apiserver, version: v1.34.2
	2025/12/01 20:09:17 Generating JWE encryption key
	2025/12/01 20:09:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:09:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:09:18 Initializing JWE encryption key from synchronized object
	2025/12/01 20:09:18 Creating in-cluster Sidecar client
	2025/12/01 20:09:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:09:18 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030] <==
	I1201 20:09:07.642372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:37.645487       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791] <==
	I1201 20:09:38.452698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:38.460256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:38.460337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:38.462713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:41.917607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:46.177898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:49.776195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:52.830080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.852145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.856751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:55.856921       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:55.857057       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f!
	I1201 20:09:55.857058       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40bc02c5-697a-4268-94f8-e188e6079112", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f became leader
	W1201 20:09:55.858929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.862636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:55.957334       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f!
	W1201 20:09:57.865585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:57.871057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:59.875143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:59.879488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682: exit status 2 (317.530272ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-009682
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-009682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	        "Created": "2025-12-01T20:07:49.041220039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 363710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-01T20:08:57.778214451Z",
	            "FinishedAt": "2025-12-01T20:08:55.541776699Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/hosts",
	        "LogPath": "/var/lib/docker/containers/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb/0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb-json.log",
	        "Name": "/default-k8s-diff-port-009682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-009682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-009682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b0f250c2430ed793268d49fe6f01681a987a13ec368e389ba330eeac30226fb",
	                "LowerDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b-init/diff:/var/lib/docker/overlay2/5c1c6028a5e886decc3e2fbbfb80eb2603292e75ea541093475167b29a2083b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c672d918b52b353bd2ee1692962048a77130b297aef1e84b0420ee8646b9541b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-009682",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-009682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-009682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-009682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f4483c3efdd8543cd46a380f45ee8fa65d4ce890782763e1cf0beed7fa6c958c",
	            "SandboxKey": "/var/run/docker/netns/f4483c3efdd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-009682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae21c1908b572396f83bd86ca68adf4c8b9646d28fbd4ac53d2a1a3af1c0eae4",
	                    "EndpointID": "3507bbdfdf98ace5dfe7d92a7f83990b94e8125e539a564786ede13fce636652",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "16:c0:c7:86:1c:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-009682",
	                        "0b0f250c2430"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
E1201 20:10:01.996834   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/calico-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682: exit status 2 (322.610208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25: (1.053180881s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-009682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │                     │
	│ delete  │ -p old-k8s-version-217464                                                                                                                                                                                                                            │ old-k8s-version-217464       │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ stop    │ -p default-k8s-diff-port-009682 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:08 UTC │
	│ start   │ -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:08 UTC │ 01 Dec 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-456990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-456990 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ no-preload-240359 image list --format=json                                                                                                                                                                                                           │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p no-preload-240359 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ start   │ -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p no-preload-240359                                                                                                                                                                                                                                 │ no-preload-240359            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ embed-certs-990820 image list --format=json                                                                                                                                                                                                          │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p embed-certs-990820 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ image   │ newest-cni-456990 image list --format=json                                                                                                                                                                                                           │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p newest-cni-456990 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p embed-certs-990820                                                                                                                                                                                                                                │ embed-certs-990820           │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p newest-cni-456990                                                                                                                                                                                                                                 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ delete  │ -p newest-cni-456990                                                                                                                                                                                                                                 │ newest-cni-456990            │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ image   │ default-k8s-diff-port-009682 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │ 01 Dec 25 20:09 UTC │
	│ pause   │ -p default-k8s-diff-port-009682 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-009682 │ jenkins │ v1.37.0 │ 01 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:09:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:09:21.981961  369577 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:21.982284  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982309  369577 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:21.982317  369577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:21.982605  369577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:09:21.983126  369577 out.go:368] Setting JSON to false
	I1201 20:09:21.984534  369577 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6713,"bootTime":1764613049,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:21.984615  369577 start.go:143] virtualization: kvm guest
	I1201 20:09:21.986551  369577 out.go:179] * [newest-cni-456990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:21.987815  369577 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:21.987822  369577 notify.go:221] Checking for updates...
	I1201 20:09:21.989035  369577 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:21.990281  369577 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:21.991469  369577 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:09:21.992827  369577 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:21.993968  369577 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:21.995635  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:21.996324  369577 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:22.023631  369577 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:09:22.023759  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.086345  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.076486449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.086443  369577 docker.go:319] overlay module found
	I1201 20:09:22.088141  369577 out.go:179] * Using the docker driver based on existing profile
	I1201 20:09:22.089326  369577 start.go:309] selected driver: docker
	I1201 20:09:22.089342  369577 start.go:927] validating driver "docker" against &{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.089433  369577 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:22.089938  369577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:09:22.149933  369577 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:09:22.139611829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:09:22.150188  369577 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:22.150214  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:22.150268  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:22.150340  369577 start.go:353] cluster config:
	{Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:22.151906  369577 out.go:179] * Starting "newest-cni-456990" primary control-plane node in "newest-cni-456990" cluster
	I1201 20:09:22.153186  369577 cache.go:134] Beginning downloading kic base image for docker with crio
	I1201 20:09:22.154362  369577 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1201 20:09:22.155412  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:22.155527  369577 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	W1201 20:09:22.171714  369577 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.177942  369577 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1201 20:09:22.177960  369577 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1201 20:09:22.189038  369577 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1201 20:09:22.189216  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.189326  369577 cache.go:107] acquiring lock: {Name:mkfb073f28c5d8c8d3d86356c45c70dd1e2004dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189338  369577 cache.go:107] acquiring lock: {Name:mkc92374151712b4806747490d187953ae21a58d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189371  369577 cache.go:107] acquiring lock: {Name:mk865bd5160866b82c3c4017851803598e1b929c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189422  369577 cache.go:107] acquiring lock: {Name:mk773ed33fa1e8ec1c4c0223e5734faea21632fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189430  369577 cache.go:107] acquiring lock: {Name:mk0738eccef6afbd5daf7149f54edabb749f37f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189489  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:09:22.189487  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:09:22.189498  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 136.335µs
	I1201 20:09:22.189503  369577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 233.665µs
	I1201 20:09:22.189510  369577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 188.139µs
	I1201 20:09:22.189518  369577 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:09:22.189519  369577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189522  369577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189439  369577 cache.go:107] acquiring lock: {Name:mk6b5845baaea000a530e17e97a93f47dfb76099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189532  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:09:22.189541  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:09:22.189501  369577 cache.go:107] acquiring lock: {Name:mk27bccd2c5069a28bfd06c5ca5926da3d72b508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189548  369577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 129.513µs
	I1201 20:09:22.189552  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:09:22.189546  369577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 174.868µs
	I1201 20:09:22.189560  369577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.115µs
	I1201 20:09:22.189575  369577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:09:22.189562  369577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189565  369577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:09:22.189551  369577 cache.go:243] Successfully downloaded all kic artifacts
	I1201 20:09:22.189328  369577 cache.go:107] acquiring lock: {Name:mk11830a92dac1bd25dfa401c24a0b8c4cdadc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189614  369577 start.go:360] acquireMachinesLock for newest-cni-456990: {Name:mk2627c40ed3bb60b8333e38b64846aaac23401d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:09:22.189681  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:09:22.189693  369577 cache.go:115] /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:09:22.189695  369577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 374.309µs
	I1201 20:09:22.189705  369577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:09:22.189706  369577 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 254.555µs
	I1201 20:09:22.189708  369577 start.go:364] duration metric: took 76.437µs to acquireMachinesLock for "newest-cni-456990"
	I1201 20:09:22.189717  369577 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-13091/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:09:22.189725  369577 cache.go:87] Successfully saved all images to host disk.
	I1201 20:09:22.189750  369577 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:09:22.189762  369577 fix.go:54] fixHost starting: 
	I1201 20:09:22.190057  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.208529  369577 fix.go:112] recreateIfNeeded on newest-cni-456990: state=Stopped err=<nil>
	W1201 20:09:22.208577  369577 fix.go:138] unexpected machine state, will restart: <nil>
	W1201 20:09:19.888195  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:21.888394  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:22.210869  369577 out.go:252] * Restarting existing docker container for "newest-cni-456990" ...
	I1201 20:09:22.210940  369577 cli_runner.go:164] Run: docker start newest-cni-456990
	I1201 20:09:22.483881  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:22.503059  369577 kic.go:430] container "newest-cni-456990" state is running.
	I1201 20:09:22.503442  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:22.523479  369577 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/config.json ...
	I1201 20:09:22.523677  369577 machine.go:94] provisionDockerMachine start ...
	I1201 20:09:22.523741  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:22.543913  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:22.544245  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:22.544267  369577 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:09:22.544844  369577 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47222->127.0.0.1:33138: read: connection reset by peer
	I1201 20:09:25.685375  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.685403  369577 ubuntu.go:182] provisioning hostname "newest-cni-456990"
	I1201 20:09:25.685460  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.705542  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.705781  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.705803  369577 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-456990 && echo "newest-cni-456990" | sudo tee /etc/hostname
	I1201 20:09:25.852705  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-456990
	
	I1201 20:09:25.852773  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:25.871132  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:25.871412  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:25.871435  369577 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-456990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-456990/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-456990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:09:26.010998  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:09:26.011023  369577 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-13091/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-13091/.minikube}
	I1201 20:09:26.011049  369577 ubuntu.go:190] setting up certificates
	I1201 20:09:26.011060  369577 provision.go:84] configureAuth start
	I1201 20:09:26.011120  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:26.029504  369577 provision.go:143] copyHostCerts
	I1201 20:09:26.029554  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem, removing ...
	I1201 20:09:26.029562  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem
	I1201 20:09:26.029637  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/ca.pem (1082 bytes)
	I1201 20:09:26.029768  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem, removing ...
	I1201 20:09:26.029778  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem
	I1201 20:09:26.029805  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/cert.pem (1123 bytes)
	I1201 20:09:26.029875  369577 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem, removing ...
	I1201 20:09:26.029882  369577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem
	I1201 20:09:26.029905  369577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-13091/.minikube/key.pem (1675 bytes)
	I1201 20:09:26.029963  369577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem org=jenkins.newest-cni-456990 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-456990]
	I1201 20:09:26.328550  369577 provision.go:177] copyRemoteCerts
	I1201 20:09:26.328608  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:09:26.328639  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.347160  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.446331  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:09:26.464001  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1201 20:09:26.480946  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1201 20:09:26.497614  369577 provision.go:87] duration metric: took 486.54109ms to configureAuth
	I1201 20:09:26.497646  369577 ubuntu.go:206] setting minikube options for container-runtime
	I1201 20:09:26.497800  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:26.497887  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.515668  369577 main.go:143] libmachine: Using SSH client type: native
	I1201 20:09:26.515898  369577 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1201 20:09:26.515922  369577 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:09:26.810418  369577 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:09:26.810446  369577 machine.go:97] duration metric: took 4.28675482s to provisionDockerMachine
	I1201 20:09:26.810460  369577 start.go:293] postStartSetup for "newest-cni-456990" (driver="docker")
	I1201 20:09:26.810476  369577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:09:26.810535  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:09:26.810578  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:26.830278  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:26.931436  369577 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:09:26.935157  369577 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1201 20:09:26.935188  369577 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1201 20:09:26.935201  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/addons for local assets ...
	I1201 20:09:26.935251  369577 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-13091/.minikube/files for local assets ...
	I1201 20:09:26.935381  369577 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem -> 168732.pem in /etc/ssl/certs
	I1201 20:09:26.935506  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:09:26.944725  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:26.965060  369577 start.go:296] duration metric: took 154.584971ms for postStartSetup
	I1201 20:09:26.965147  369577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 20:09:26.965194  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	W1201 20:09:24.388422  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:26.888750  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:26.987515  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.084060  369577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1201 20:09:27.088479  369577 fix.go:56] duration metric: took 4.898708724s for fixHost
	I1201 20:09:27.088506  369577 start.go:83] releasing machines lock for "newest-cni-456990", held for 4.898783939s
	I1201 20:09:27.088574  369577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-456990
	I1201 20:09:27.105855  369577 ssh_runner.go:195] Run: cat /version.json
	I1201 20:09:27.105902  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.105932  369577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:09:27.106000  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:27.126112  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.126915  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:27.222363  369577 ssh_runner.go:195] Run: systemctl --version
	I1201 20:09:27.278795  369577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:09:27.318224  369577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:09:27.323279  369577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:09:27.323360  369577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:09:27.331855  369577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:09:27.331879  369577 start.go:496] detecting cgroup driver to use...
	I1201 20:09:27.331910  369577 detect.go:190] detected "systemd" cgroup driver on host os
	I1201 20:09:27.331955  369577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:09:27.348474  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:09:27.362507  369577 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:09:27.362561  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:09:27.377474  369577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:09:27.389979  369577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:09:27.468376  369577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:09:27.547053  369577 docker.go:234] disabling docker service ...
	I1201 20:09:27.547113  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:09:27.561159  369577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:09:27.573365  369577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:09:27.653350  369577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:09:27.738303  369577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:09:27.751671  369577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:09:27.769449  369577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:09:27.769508  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.778583  369577 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1201 20:09:27.778652  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.787603  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.796800  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.805663  369577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:09:27.813756  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.822718  369577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.831034  369577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:09:27.840425  369577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:09:27.847564  369577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:09:27.854787  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:27.944777  369577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:09:28.086649  369577 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:09:28.086709  369577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:09:28.090736  369577 start.go:564] Will wait 60s for crictl version
	I1201 20:09:28.090798  369577 ssh_runner.go:195] Run: which crictl
	I1201 20:09:28.094303  369577 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1201 20:09:28.118835  369577 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1201 20:09:28.118914  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.145870  369577 ssh_runner.go:195] Run: crio --version
	I1201 20:09:28.174675  369577 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1201 20:09:28.175801  369577 cli_runner.go:164] Run: docker network inspect newest-cni-456990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1201 20:09:28.193466  369577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1201 20:09:28.197584  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.209396  369577 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1201 20:09:28.210659  369577 kubeadm.go:884] updating cluster {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:09:28.210796  369577 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:09:28.210848  369577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:09:28.241698  369577 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:09:28.241718  369577 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:09:28.241727  369577 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1201 20:09:28.241822  369577 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-456990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:09:28.241897  369577 ssh_runner.go:195] Run: crio config
	I1201 20:09:28.288940  369577 cni.go:84] Creating CNI manager for ""
	I1201 20:09:28.288962  369577 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1201 20:09:28.288978  369577 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1201 20:09:28.289003  369577 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-456990 NodeName:newest-cni-456990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:09:28.289139  369577 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-456990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:09:28.289213  369577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1201 20:09:28.297792  369577 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:09:28.297839  369577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:09:28.307851  369577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1201 20:09:28.324364  369577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1201 20:09:28.336458  369577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
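
For reference, the config rendered above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of reading that file back and echoing the KubeletConfiguration fields shown in the log (cgroupDriver, containerRuntimeEndpoint); the struct-free decoding here is an illustration, not minikube's own types.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path from the log above; the file holds several YAML documents separated by ---.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Pick out the kubelet document and print the fields the log cares about.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:            ", doc["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
		}
	}
}
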
	I1201 20:09:28.348629  369577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1201 20:09:28.351983  369577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:09:28.361836  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:28.448911  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:28.474045  369577 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990 for IP: 192.168.76.2
	I1201 20:09:28.474066  369577 certs.go:195] generating shared ca certs ...
	I1201 20:09:28.474085  369577 certs.go:227] acquiring lock for ca certs: {Name:mk6fb4446d157658d1e5b34b3c73d17b60f90690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:28.474246  369577 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key
	I1201 20:09:28.474327  369577 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key
	I1201 20:09:28.474342  369577 certs.go:257] generating profile certs ...
	I1201 20:09:28.474437  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/client.key
	I1201 20:09:28.474521  369577 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key.79f10757
	I1201 20:09:28.474577  369577 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key
	I1201 20:09:28.474743  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem (1338 bytes)
	W1201 20:09:28.474794  369577 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873_empty.pem, impossibly tiny 0 bytes
	I1201 20:09:28.474809  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca-key.pem (1675 bytes)
	I1201 20:09:28.474853  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/ca.pem (1082 bytes)
	I1201 20:09:28.474889  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:09:28.474924  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/certs/key.pem (1675 bytes)
	I1201 20:09:28.474982  369577 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem (1708 bytes)
	I1201 20:09:28.475624  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:09:28.496424  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1201 20:09:28.515406  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:09:28.534645  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:09:28.557394  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1201 20:09:28.575824  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:09:28.592501  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:09:28.608549  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/newest-cni-456990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:09:28.624765  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/ssl/certs/168732.pem --> /usr/share/ca-certificates/168732.pem (1708 bytes)
	I1201 20:09:28.640559  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:09:28.657592  369577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-13091/.minikube/certs/16873.pem --> /usr/share/ca-certificates/16873.pem (1338 bytes)
	I1201 20:09:28.675267  369577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:09:28.686884  369577 ssh_runner.go:195] Run: openssl version
	I1201 20:09:28.692748  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16873.pem && ln -fs /usr/share/ca-certificates/16873.pem /etc/ssl/certs/16873.pem"
	I1201 20:09:28.700669  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704098  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:24 /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.704138  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16873.pem
	I1201 20:09:28.737763  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16873.pem /etc/ssl/certs/51391683.0"
	I1201 20:09:28.746239  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168732.pem && ln -fs /usr/share/ca-certificates/168732.pem /etc/ssl/certs/168732.pem"
	I1201 20:09:28.754672  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758325  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:24 /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.758382  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168732.pem
	I1201 20:09:28.794154  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168732.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:09:28.802236  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:09:28.810900  369577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814671  369577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:06 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.814728  369577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:09:28.849049  369577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
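
The cert steps above follow the same pattern for each CA: hash the PEM with openssl x509 -hash -noout, then symlink it into /etc/ssl/certs/<hash>.0 if the link is missing. A small Go sketch of those two commands run locally (the path and the b5213941 hash come from the log; this stands in for minikube's ssh_runner, which does the same over SSH).

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	// "openssl x509 -hash -noout -in <pem>" prints the subject-name hash,
	// b5213941 for minikubeCA in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of the log's: test -L <link> || ln -fs <pem> <link>
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println(pem, "->", link)
}
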
	I1201 20:09:28.857127  369577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:09:28.860939  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:09:28.895833  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:09:28.930763  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:09:28.964635  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:09:29.008623  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:09:29.049534  369577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:09:29.099499  369577 kubeadm.go:401] StartCluster: {Name:newest-cni-456990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-456990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:09:29.099618  369577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:09:29.099673  369577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:09:29.150581  369577 cri.go:89] found id: "1417580c3497ca25a45487d338a1cc71b8a58465f823121b513347c144eee2f7"
	I1201 20:09:29.150604  369577 cri.go:89] found id: "daab845ade1685423ec3afbbac2c0687b487468dbfe13e4dd079ef36264b1f9b"
	I1201 20:09:29.150609  369577 cri.go:89] found id: "b6856377ff536b29c4763611ef889cb5638e3ac498d777d3699f2afd790303be"
	I1201 20:09:29.150614  369577 cri.go:89] found id: "392fe0a49d21c21c2e3ae9e47a9b34e7303e162785d520c5de9f5eaec0c8e13b"
	I1201 20:09:29.150618  369577 cri.go:89] found id: ""
	I1201 20:09:29.150664  369577 ssh_runner.go:195] Run: sudo runc list -f json
	W1201 20:09:29.164173  369577 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T20:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I1201 20:09:29.164257  369577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:09:29.173942  369577 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:09:29.173960  369577 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:09:29.174005  369577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:09:29.183058  369577 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:09:29.184150  369577 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-456990" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.184912  369577 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-13091/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-456990" cluster setting kubeconfig missing "newest-cni-456990" context setting]
	I1201 20:09:29.185982  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.188022  369577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:09:29.197072  369577 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1201 20:09:29.197113  369577 kubeadm.go:602] duration metric: took 23.134156ms to restartPrimaryControlPlane
	I1201 20:09:29.197123  369577 kubeadm.go:403] duration metric: took 97.633003ms to StartCluster
	I1201 20:09:29.197139  369577 settings.go:142] acquiring lock: {Name:mk0ef2c6f955a3aebb6e1bdf8ac10660f3895858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.197207  369577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:09:29.199443  369577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-13091/kubeconfig: {Name:mk9f556cccf290218a8500ac730419527a09976f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:09:29.199703  369577 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:09:29.199769  369577 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:09:29.199865  369577 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-456990"
	I1201 20:09:29.199885  369577 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-456990"
	W1201 20:09:29.199893  369577 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:09:29.199920  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199928  369577 config.go:182] Loaded profile config "newest-cni-456990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 20:09:29.199931  369577 addons.go:70] Setting dashboard=true in profile "newest-cni-456990"
	I1201 20:09:29.199951  369577 addons.go:239] Setting addon dashboard=true in "newest-cni-456990"
	W1201 20:09:29.199959  369577 addons.go:248] addon dashboard should already be in state true
	I1201 20:09:29.199970  369577 addons.go:70] Setting default-storageclass=true in profile "newest-cni-456990"
	I1201 20:09:29.199984  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.199985  369577 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-456990"
	I1201 20:09:29.200260  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200479  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.200487  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.201913  369577 out.go:179] * Verifying Kubernetes components...
	I1201 20:09:29.203109  369577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:09:29.227872  369577 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1201 20:09:29.228002  369577 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:09:29.228898  369577 addons.go:239] Setting addon default-storageclass=true in "newest-cni-456990"
	W1201 20:09:29.228919  369577 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:09:29.228944  369577 host.go:66] Checking if "newest-cni-456990" exists ...
	I1201 20:09:29.229409  369577 cli_runner.go:164] Run: docker container inspect newest-cni-456990 --format={{.State.Status}}
	I1201 20:09:29.229522  369577 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.229537  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:09:29.229584  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.230745  369577 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1201 20:09:29.232822  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1201 20:09:29.232838  369577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1201 20:09:29.232934  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.270464  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.271089  369577 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.271109  369577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:09:29.271168  369577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-456990
	I1201 20:09:29.299544  369577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/newest-cni-456990/id_rsa Username:docker}
	I1201 20:09:29.374473  369577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:09:29.393341  369577 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:09:29.393411  369577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:09:29.397957  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1201 20:09:29.397976  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1201 20:09:29.401460  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:09:29.414861  369577 api_server.go:72] duration metric: took 215.119797ms to wait for apiserver process to appear ...
	I1201 20:09:29.414970  369577 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:09:29.415004  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:29.418380  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1201 20:09:29.418401  369577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1201 20:09:29.422686  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:09:29.442227  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1201 20:09:29.442256  369577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1201 20:09:29.462696  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1201 20:09:29.462720  369577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1201 20:09:29.488037  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1201 20:09:29.488054  369577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1201 20:09:29.503571  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1201 20:09:29.503606  369577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1201 20:09:29.520206  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1201 20:09:29.520228  369577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1201 20:09:29.535881  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1201 20:09:29.535904  369577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1201 20:09:29.552205  369577 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:29.552229  369577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1201 20:09:29.569173  369577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1201 20:09:30.447688  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.447714  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.447729  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.491568  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:09:30.491608  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:09:30.915119  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:30.920667  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:30.920698  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.073336  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671812187s)
	I1201 20:09:31.073416  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.650688692s)
	I1201 20:09:31.073529  369577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.504317755s)
	I1201 20:09:31.074936  369577 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-456990 addons enable metrics-server
	
	I1201 20:09:31.086132  369577 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1201 20:09:31.087441  369577 addons.go:530] duration metric: took 1.88767322s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
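
The addon step above copies each manifest into /etc/kubernetes/addons/ and then issues a single kubectl apply with one -f flag per file, using the node-local kubeconfig. A trimmed Go sketch of that invocation (binary path, KUBECONFIG and file names copied from the log; only a subset of the dashboard manifests is listed here, and sudo is omitted).

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// A subset of the manifests the log applies in one call.
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard files as listed in the log
	}

	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
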
	I1201 20:09:31.415255  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.419239  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:09:31.419264  369577 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:09:31.915470  369577 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1201 20:09:31.920415  369577 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1201 20:09:31.921522  369577 api_server.go:141] control plane version: v1.35.0-beta.0
	I1201 20:09:31.921546  369577 api_server.go:131] duration metric: took 2.506562046s to wait for apiserver health ...
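
The 403, 500 and finally 200 responses above are the usual progression while the apiserver finishes its post-start hooks; minikube simply keeps polling /healthz until it returns ok. A minimal Go sketch of that kind of poll against the endpoint from the log (TLS verification is skipped here only because the sketch does not wire up the minikubeCA bundle; the timeout values are illustrative).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok", as at 20:09:31.920 in the log above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
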
	I1201 20:09:31.921555  369577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:09:31.925533  369577 system_pods.go:59] 8 kube-system pods found
	I1201 20:09:31.925565  369577 system_pods.go:61] "coredns-7d764666f9-6t6ld" [f432ca97-c9f1-42a0-999c-c7b0c90658c1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925575  369577 system_pods.go:61] "etcd-newest-cni-456990" [4ab9e88c-f019-49cb-b3b4-0ca5fe01e5bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:09:31.925588  369577 system_pods.go:61] "kindnet-gbbwm" [7386a806-e262-4de4-827f-fcc08a786840] Running
	I1201 20:09:31.925605  369577 system_pods.go:61] "kube-apiserver-newest-cni-456990" [f3b68723-7bb4-4725-9863-334f5bb8e2ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:09:31.925615  369577 system_pods.go:61] "kube-controller-manager-newest-cni-456990" [105b14f4-dc98-400c-b035-c01fff9181ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:09:31.925621  369577 system_pods.go:61] "kube-proxy-gmbzw" [b60069ca-4117-475a-9a2f-5ecd18fca600] Running
	I1201 20:09:31.925634  369577 system_pods.go:61] "kube-scheduler-newest-cni-456990" [d4eea582-e65e-440d-9d3e-05c34228b6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:09:31.925643  369577 system_pods.go:61] "storage-provisioner" [7a437438-9384-461e-9867-0fadcabcfea6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1201 20:09:31.925653  369577 system_pods.go:74] duration metric: took 4.093389ms to wait for pod list to return data ...
	I1201 20:09:31.925664  369577 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:09:31.928075  369577 default_sa.go:45] found service account: "default"
	I1201 20:09:31.928096  369577 default_sa.go:55] duration metric: took 2.423245ms for default service account to be created ...
	I1201 20:09:31.928110  369577 kubeadm.go:587] duration metric: took 2.728376297s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 20:09:31.928130  369577 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:09:31.930417  369577 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1201 20:09:31.930440  369577 node_conditions.go:123] node cpu capacity is 8
	I1201 20:09:31.930454  369577 node_conditions.go:105] duration metric: took 2.318192ms to run NodePressure ...
	I1201 20:09:31.930467  369577 start.go:242] waiting for startup goroutines ...
	I1201 20:09:31.930480  369577 start.go:247] waiting for cluster config update ...
	I1201 20:09:31.930496  369577 start.go:256] writing updated cluster config ...
	I1201 20:09:31.930881  369577 ssh_runner.go:195] Run: rm -f paused
	I1201 20:09:31.982349  369577 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1201 20:09:31.984030  369577 out.go:179] * Done! kubectl is now configured to use "newest-cni-456990" cluster and "default" namespace by default
	W1201 20:09:29.388771  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:31.888825  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:34.387216  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:36.387620  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:38.887396  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:40.887546  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	W1201 20:09:42.887792  363421 pod_ready.go:104] pod "coredns-66bc5c9577-hf646" is not "Ready", error: <nil>
	I1201 20:09:44.387970  363421 pod_ready.go:94] pod "coredns-66bc5c9577-hf646" is "Ready"
	I1201 20:09:44.387998  363421 pod_ready.go:86] duration metric: took 36.005947971s for pod "coredns-66bc5c9577-hf646" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.390521  363421 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.394349  363421 pod_ready.go:94] pod "etcd-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.394376  363421 pod_ready.go:86] duration metric: took 3.831228ms for pod "etcd-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.396320  363421 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.402040  363421 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.402063  363421 pod_ready.go:86] duration metric: took 5.717196ms for pod "kube-apiserver-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.403774  363421 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.586644  363421 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:44.586669  363421 pod_ready.go:86] duration metric: took 182.875387ms for pod "kube-controller-manager-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:44.786881  363421 pod_ready.go:83] waiting for pod "kube-proxy-fjn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.186372  363421 pod_ready.go:94] pod "kube-proxy-fjn7h" is "Ready"
	I1201 20:09:45.186399  363421 pod_ready.go:86] duration metric: took 399.491533ms for pod "kube-proxy-fjn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.386608  363421 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.786541  363421 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-009682" is "Ready"
	I1201 20:09:45.786569  363421 pod_ready.go:86] duration metric: took 399.93667ms for pod "kube-scheduler-default-k8s-diff-port-009682" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:09:45.786585  363421 pod_ready.go:40] duration metric: took 37.407704581s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:09:45.828303  363421 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:09:45.830061  363421 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-009682" cluster and "default" namespace by default
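
The pod_ready lines above poll each kube-system pod until its Ready condition is true (coredns-66bc5c9577-hf646 took about 36s). A sketch of one such check using client-go; the kubeconfig path and pod name are taken from the log, and the loop is an illustration rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-13091/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Pod name taken from the log; it became Ready after ~36s there.
	const ns, name = "kube-system", "coredns-66bc5c9577-hf646"
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
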
	
	
	==> CRI-O <==
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.132611776Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.135925561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 01 20:09:18 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:18.135952817Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.275189721Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4fd3fdb9-4f4d-406b-8b9b-3fb7062f8e98 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.276268569Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=26434a0b-c41a-4a8c-bbaa-e0f5bfb8a28a name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.277435957Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=8c5faeab-5e6b-456e-93c5-bca616647297 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.277579775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.283655481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.284210784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.311692635Z" level=info msg="Created container feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=8c5faeab-5e6b-456e-93c5-bca616647297 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.312324797Z" level=info msg="Starting container: feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad" id=7a4034ba-108a-44af-a16c-488b1bbc23b0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.314563071Z" level=info msg="Started container" PID=1765 containerID=feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper id=7a4034ba-108a-44af-a16c-488b1bbc23b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37efaefe205f98fe873b541457424cb2628d086adfb38212fd7ba6aa4d161e07
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.392275597Z" level=info msg="Removing container: c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74" id=da91aeb5-62f8-4e00-9dd5-a3ae0ac86219 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:35 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:35.40634904Z" level=info msg="Removed container c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl/dashboard-metrics-scraper" id=da91aeb5-62f8-4e00-9dd5-a3ae0ac86219 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.40227559Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19e1c516-5704-4b64-ab3a-63b132657058 name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.403144403Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0a34d482-6836-463b-8af2-da2814967fad name=/runtime.v1.ImageService/ImageStatus
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.404195655Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a0885c5f-fda4-4496-bc61-4ea13b43583f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.404360055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408673639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408807684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/12e65bfc4b0e58e884e9a4b4bf044268a9d8bdaa44cf746673865592d9405b62/merged/etc/passwd: no such file or directory"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.408829168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/12e65bfc4b0e58e884e9a4b4bf044268a9d8bdaa44cf746673865592d9405b62/merged/etc/group: no such file or directory"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.409949796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.436750757Z" level=info msg="Created container ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791: kube-system/storage-provisioner/storage-provisioner" id=a0885c5f-fda4-4496-bc61-4ea13b43583f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.437379293Z" level=info msg="Starting container: ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791" id=f4a4739f-3ba4-43be-a05f-afbd7142cda7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 01 20:09:38 default-k8s-diff-port-009682 crio[566]: time="2025-12-01T20:09:38.439516486Z" level=info msg="Started container" PID=1779 containerID=ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791 description=kube-system/storage-provisioner/storage-provisioner id=f4a4739f-3ba4-43be-a05f-afbd7142cda7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bc9bd1252e530e477cb7d89c311ff48537ea04a23d6a7e4031a30b7c51aa80b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ea7770d9081d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   1bc9bd1252e53       storage-provisioner                                    kube-system
	feda1b78f267e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   37efaefe205f9       dashboard-metrics-scraper-6ffb444bf9-g9xdl             kubernetes-dashboard
	c9b4c204afd5c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   5b4044f34aeed       kubernetes-dashboard-855c9754f9-s6hvn                  kubernetes-dashboard
	2ae4eee24c0b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   f12f354f99343       coredns-66bc5c9577-hf646                               kube-system
	c3d03f6faa71d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   9c4b5f2125399       busybox                                                default
	12ce190d184fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   9389a79841665       kindnet-pqt6x                                          kube-system
	6b13013ae0b35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   1bc9bd1252e53       storage-provisioner                                    kube-system
	3ddc74de106d8       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   98e7bcc384682       kube-proxy-fjn7h                                       kube-system
	ef4ba8d77dd0e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   d81787591073e       etcd-default-k8s-diff-port-009682                      kube-system
	b15229721c1e0       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   5cc98afc5c828       kube-controller-manager-default-k8s-diff-port-009682   kube-system
	a1e60ba950826       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   4be37620627d3       kube-scheduler-default-k8s-diff-port-009682            kube-system
	c037673fa52f7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   213be4f0a0151       kube-apiserver-default-k8s-diff-port-009682            kube-system
	
	
	==> coredns [2ae4eee24c0b716e3bb04fe195edb9f8f48409b1e41fc2315fe6778ea470078e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37527 - 9075 "HINFO IN 2514835802269368415.1494518536876060678. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019698164s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-009682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-009682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=default-k8s-diff-port-009682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_08_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-009682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:09:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:09:37 +0000   Mon, 01 Dec 2025 20:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-009682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                05b6424d-f307-4593-b87d-4cd8ab421755
	  Boot ID:                    7cbb49c2-84ec-48ae-85bb-dc0694824f4d
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-hf646                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-009682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-pqt6x                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-009682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-009682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-fjn7h                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-009682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g9xdl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s6hvn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-009682 event: Registered Node default-k8s-diff-port-009682 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-009682 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-009682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-009682 event: Registered Node default-k8s-diff-port-009682 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[ +14.146700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 74 e3 28 9e 50 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 a9 bd 5a 04 53 08 06
	[Dec 1 20:06] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 31 f8 d4 0a 78 08 06
	[  +0.137791] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +28.431513] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	[  +0.000248] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 44 8d df 72 e7 08 06
	[ +15.024511] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 a5 7e fd 1a 59 08 06
	[  +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 02 f7 ac e1 1e 08 06
	[ +13.069990] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 4a 02 76 71 96 08 06
	[  +0.000301] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 12 8e 7a 4b 60 08 06
	[Dec 1 20:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 71 17 f2 39 85 08 06
	[  +0.000464] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 59 c1 80 49 c4 08 06
	
	
	==> etcd [ef4ba8d77dd0e9071c7b175fb62f22f9aa86ca30b16bb6d7363c6dc686aac62e] <==
	{"level":"warn","ts":"2025-12-01T20:09:06.116736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.123940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.130194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.146576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.152789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.159462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.166472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.174230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.180501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.191007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.199449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.207072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.213852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.220764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.227225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.243665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.251547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.258393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.265100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.273365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.280547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.293761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.301118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.308030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:09:06.355283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:10:02 up  1:52,  0 user,  load average: 2.83, 3.19, 2.37
	Linux default-k8s-diff-port-009682 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [12ce190d184fd2b48b686cd30aba07a65276dceda74d844c9c56396d7dfbd86a] <==
	I1201 20:09:07.817403       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1201 20:09:07.817678       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1201 20:09:07.817905       1 main.go:148] setting mtu 1500 for CNI 
	I1201 20:09:07.817930       1 main.go:178] kindnetd IP family: "ipv4"
	I1201 20:09:07.817953       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-01T20:09:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1201 20:09:08.115999       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1201 20:09:08.116091       1 controller.go:381] "Waiting for informer caches to sync"
	I1201 20:09:08.116111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1201 20:09:08.116536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1201 20:09:08.516234       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1201 20:09:08.516268       1 metrics.go:72] Registering metrics
	I1201 20:09:08.516356       1 controller.go:711] "Syncing nftables rules"
	I1201 20:09:18.116785       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:18.116839       1 main.go:301] handling current node
	I1201 20:09:28.120377       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:28.120411       1 main.go:301] handling current node
	I1201 20:09:38.116703       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:38.116752       1 main.go:301] handling current node
	I1201 20:09:48.116981       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:48.117015       1 main.go:301] handling current node
	I1201 20:09:58.116502       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1201 20:09:58.116566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c037673fa52f79aa510971b202ef75f7b96fdef9c3fc063c32e8c7ef0d11996a] <==
	I1201 20:09:06.845356       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1201 20:09:06.845394       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 20:09:06.845479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1201 20:09:06.845517       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1201 20:09:06.845526       1 aggregator.go:171] initial CRD sync complete...
	I1201 20:09:06.845559       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:09:06.845641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:09:06.845668       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:09:06.845870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1201 20:09:06.851524       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1201 20:09:06.853358       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:09:06.862827       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:09:06.879438       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:09:07.115992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1201 20:09:07.145634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:09:07.162727       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:09:07.169839       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:09:07.176546       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:09:07.209895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.19.230"}
	I1201 20:09:07.219215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.102.78"}
	I1201 20:09:07.747641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:09:10.213847       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1201 20:09:10.665429       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:09:10.715658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:09:10.715658       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b15229721c1e0a47f1f11b128c387218e176a2618444bdeec996eb0d113098d4] <==
	I1201 20:09:10.128789       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:09:10.130919       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 20:09:10.133219       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:09:10.153563       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1201 20:09:10.154815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1201 20:09:10.158847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1201 20:09:10.160046       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:09:10.160181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1201 20:09:10.160351       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:09:10.160418       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1201 20:09:10.160427       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1201 20:09:10.160439       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1201 20:09:10.160507       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1201 20:09:10.160526       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1201 20:09:10.160540       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:09:10.160548       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:09:10.160555       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:09:10.165266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:09:10.172517       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1201 20:09:10.172570       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:09:10.172629       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:09:10.172637       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:09:10.172644       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:09:10.176771       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:09:10.186092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3ddc74de106d8b1e6831a89821ee7f38d0e15ccfbc45495499f68e1e8d0c4728] <==
	I1201 20:09:07.675188       1 server_linux.go:53] "Using iptables proxy"
	I1201 20:09:07.769628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:09:07.869896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:09:07.869948       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1201 20:09:07.870049       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:09:07.891898       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1201 20:09:07.891960       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:09:07.898263       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:09:07.898650       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:09:07.898675       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:07.902444       1 config.go:200] "Starting service config controller"
	I1201 20:09:07.902522       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:09:07.902658       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:09:07.902990       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:09:07.902947       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:09:07.903453       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:09:07.903133       1 config.go:309] "Starting node config controller"
	I1201 20:09:07.903478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:09:07.903484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:09:08.002732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:09:08.003570       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 20:09:08.003589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a1e60ba95082677ce609ab21f3eb49bcc9e9c4f2b4507d8317ccd30fb12c9a8d] <==
	I1201 20:09:05.620226       1 serving.go:386] Generated self-signed cert in-memory
	W1201 20:09:06.767620       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1201 20:09:06.767726       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1201 20:09:06.767742       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1201 20:09:06.767751       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1201 20:09:06.804397       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1201 20:09:06.804501       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:09:06.808107       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:09:06.808233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1201 20:09:06.810035       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:06.810093       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:09:06.911418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:09:10 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:10.899647     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf49b\" (UniqueName: \"kubernetes.io/projected/c8483a44-6cc7-4129-95e3-734c4b95302a-kube-api-access-kf49b\") pod \"dashboard-metrics-scraper-6ffb444bf9-g9xdl\" (UID: \"c8483a44-6cc7-4129-95e3-734c4b95302a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl"
	Dec 01 20:09:10 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:10.899683     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfn6\" (UniqueName: \"kubernetes.io/projected/cc861f8f-612e-438c-af44-6b614122609d-kube-api-access-hwfn6\") pod \"kubernetes-dashboard-855c9754f9-s6hvn\" (UID: \"cc861f8f-612e-438c-af44-6b614122609d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s6hvn"
	Dec 01 20:09:14 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:14.118270     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:09:14 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:14.331188     732 scope.go:117] "RemoveContainer" containerID="1419dd4c2a0a3234a50c87f48f3aacb4e29fe775a48a09f97fa69d747d19ac7c"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:15.335535     732 scope.go:117] "RemoveContainer" containerID="1419dd4c2a0a3234a50c87f48f3aacb4e29fe775a48a09f97fa69d747d19ac7c"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:15.335682     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:15 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:15.335881     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:16 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:16.339875     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:16 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:16.340082     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:18 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:18.356604     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s6hvn" podStartSLOduration=1.691257458 podStartE2EDuration="8.356577831s" podCreationTimestamp="2025-12-01 20:09:10 +0000 UTC" firstStartedPulling="2025-12-01 20:09:11.124706848 +0000 UTC m=+6.965257035" lastFinishedPulling="2025-12-01 20:09:17.790027226 +0000 UTC m=+13.630577408" observedRunningTime="2025-12-01 20:09:18.356175361 +0000 UTC m=+14.196725565" watchObservedRunningTime="2025-12-01 20:09:18.356577831 +0000 UTC m=+14.197128033"
	Dec 01 20:09:22 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:22.137563     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:22 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:22.137740     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.274658     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.390961     732 scope.go:117] "RemoveContainer" containerID="c0cf91b5f04fa9b17016f71532cae5c7bf265ef01ac6d765f93b6046511bcd74"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:35.391187     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:35 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:35.391426     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:38 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:38.401916     732 scope.go:117] "RemoveContainer" containerID="6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030"
	Dec 01 20:09:42 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:42.136700     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:42 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:42.137011     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:55 default-k8s-diff-port-009682 kubelet[732]: I1201 20:09:55.274873     732 scope.go:117] "RemoveContainer" containerID="feda1b78f267e3d452488ed8434ab5833cccafa578ecdcea094b253f6f5401ad"
	Dec 01 20:09:55 default-k8s-diff-port-009682 kubelet[732]: E1201 20:09:55.275047     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g9xdl_kubernetes-dashboard(c8483a44-6cc7-4129-95e3-734c4b95302a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g9xdl" podUID="c8483a44-6cc7-4129-95e3-734c4b95302a"
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 01 20:09:57 default-k8s-diff-port-009682 systemd[1]: kubelet.service: Consumed 1.684s CPU time.
	
	
	==> kubernetes-dashboard [c9b4c204afd5c940c6070aab1b2e47561696de1a1705d5cc7e859b99dffa2266] <==
	2025/12/01 20:09:17 Using namespace: kubernetes-dashboard
	2025/12/01 20:09:17 Using in-cluster config to connect to apiserver
	2025/12/01 20:09:17 Using secret token for csrf signing
	2025/12/01 20:09:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/01 20:09:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/01 20:09:17 Successful initial request to the apiserver, version: v1.34.2
	2025/12/01 20:09:17 Generating JWE encryption key
	2025/12/01 20:09:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/01 20:09:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/01 20:09:18 Initializing JWE encryption key from synchronized object
	2025/12/01 20:09:18 Creating in-cluster Sidecar client
	2025/12/01 20:09:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:09:18 Serving insecurely on HTTP port: 9090
	2025/12/01 20:09:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/01 20:09:17 Starting overwatch
	
	
	==> storage-provisioner [6b13013ae0b35c020548949e4bcb3099b0f4eff47e49c2cd079f0ce044863030] <==
	I1201 20:09:07.642372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1201 20:09:37.645487       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea7770d9081d4f385b7e8df2d5352f5dab56d0549b0af8c406bf9f584f63e791] <==
	I1201 20:09:38.452698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1201 20:09:38.460256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1201 20:09:38.460337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1201 20:09:38.462713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:41.917607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:46.177898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:49.776195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:52.830080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.852145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.856751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:55.856921       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1201 20:09:55.857057       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f!
	I1201 20:09:55.857058       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40bc02c5-697a-4268-94f8-e188e6079112", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f became leader
	W1201 20:09:55.858929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:55.862636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1201 20:09:55.957334       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-009682_6b281026-9672-4268-ab3e-c9ef7cacc91f!
	W1201 20:09:57.865585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:57.871057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:59.875143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:09:59.879488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:10:01.882159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 20:10:01.886197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
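(The "==> ... <==" sections above are the profile's post-mortem diagnostics and match the layout of `minikube logs` output. A minimal sketch for re-collecting comparable output by hand, assuming the profile name default-k8s-diff-port-009682 taken from the logs above and a still-running cluster; the exact flags the harness used are not shown here, so treat these as illustrative invocations:

	out/minikube-linux-amd64 -p default-k8s-diff-port-009682 logs -n 25
	out/minikube-linux-amd64 -p default-k8s-diff-port-009682 ssh -- sudo crictl ps -a

The first command reproduces the per-component log sections; the second lists all containers on the node, similar to the container table at the top of this dump.)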
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682: exit status 2 (320.985249ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.06s)
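(A minimal sketch for replaying the checks this Pause post-mortem records, assuming the same profile and kubectl context name default-k8s-diff-port-009682 shown in the failure above; the status and kubectl commands are copied from the helpers_test.go lines, while the pause invocation is an illustrative equivalent of the step under test rather than the harness's exact call:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-009682 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
	kubectl --context default-k8s-diff-port-009682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The status command returning "Running" with a non-zero exit, as seen above, is what the test reports as exit status 2; the kubectl query lists any pods not in the Running phase after the pause.)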

                                                
                                    

Test pass (334/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 2.57
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.26
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.82
31 TestOffline 64.58
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 121.28
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/serial/GCPAuth/FakeCredentials 9.42
57 TestAddons/StoppedEnableDisable 16.74
58 TestCertOptions 34.04
59 TestCertExpiration 214.92
61 TestForceSystemdFlag 29.18
62 TestForceSystemdEnv 30.96
67 TestErrorSpam/setup 18.83
68 TestErrorSpam/start 0.67
69 TestErrorSpam/status 0.96
70 TestErrorSpam/pause 5.71
71 TestErrorSpam/unpause 6.11
72 TestErrorSpam/stop 8.14
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 67.43
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.19
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.59
84 TestFunctional/serial/CacheCmd/cache/add_local 0.79
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 46.43
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.17
95 TestFunctional/serial/LogsFileCmd 1.21
96 TestFunctional/serial/InvalidService 4.08
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 7.55
100 TestFunctional/parallel/DryRun 0.38
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 0.94
107 TestFunctional/parallel/AddonsCmd 0.19
108 TestFunctional/parallel/PersistentVolumeClaim 26.18
110 TestFunctional/parallel/SSHCmd 0.61
111 TestFunctional/parallel/CpCmd 2.05
112 TestFunctional/parallel/MySQL 18.03
113 TestFunctional/parallel/FileSync 0.32
114 TestFunctional/parallel/CertSync 2.11
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
122 TestFunctional/parallel/License 0.23
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.33
131 TestFunctional/parallel/MountCmd/any-port 12.17
132 TestFunctional/parallel/MountCmd/specific-port 2.03
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
142 TestFunctional/parallel/ProfileCmd/profile_list 0.41
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
148 TestFunctional/parallel/ImageCommands/ImageBuild 3.85
149 TestFunctional/parallel/ImageCommands/Setup 0.46
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
157 TestFunctional/parallel/Version/short 0.06
158 TestFunctional/parallel/Version/components 0.46
159 TestFunctional/parallel/ServiceCmd/List 1.72
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 43.14
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.33
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.69
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.72
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.51
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 59.48
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.23
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.26
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 8.35
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 7.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.39
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.08
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 25.35
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.58
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.9
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 15.53
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.6
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.48
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.2
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.44
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.27
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.72
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.16
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.13
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 8.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.89
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.25
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.7
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.7
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 155.26
266 TestMultiControlPlane/serial/DeployApp 4.5
267 TestMultiControlPlane/serial/PingHostFromPods 1
268 TestMultiControlPlane/serial/AddWorkerNode 23.96
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
271 TestMultiControlPlane/serial/CopyFile 17.13
272 TestMultiControlPlane/serial/StopSecondaryNode 19.79
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.57
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 104.97
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
279 TestMultiControlPlane/serial/StopCluster 47.1
280 TestMultiControlPlane/serial/RestartCluster 53.12
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
282 TestMultiControlPlane/serial/AddSecondaryNode 73.4
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
288 TestJSONOutput/start/Command 66.52
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 8.02
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 25.74
314 TestKicCustomNetwork/use_default_bridge_network 21.97
315 TestKicExistingNetwork 23.17
316 TestKicCustomSubnet 22.16
317 TestKicStaticIP 24.59
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 48.77
322 TestMountStart/serial/StartWithMountFirst 4.73
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.89
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.46
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 67.52
334 TestMultiNode/serial/DeployApp2Nodes 3.31
335 TestMultiNode/serial/PingHostFrom2Pods 0.7
336 TestMultiNode/serial/AddNode 53.01
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.67
339 TestMultiNode/serial/CopyFile 9.85
340 TestMultiNode/serial/StopNode 2.24
341 TestMultiNode/serial/StartAfterStop 7.15
342 TestMultiNode/serial/RestartKeepsNodes 79.44
343 TestMultiNode/serial/DeleteNode 5.22
344 TestMultiNode/serial/StopMultiNode 30.38
345 TestMultiNode/serial/RestartMultiNode 25.18
346 TestMultiNode/serial/ValidateNameConflict 22.2
351 TestPreload 102.58
353 TestScheduledStopUnix 98.83
356 TestInsufficientStorage 11.8
357 TestRunningBinaryUpgrade 69.87
359 TestKubernetesUpgrade 330.25
360 TestMissingContainerUpgrade 83.43
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
363 TestNoKubernetes/serial/StartWithK8s 42.21
364 TestNoKubernetes/serial/StartWithStopK8s 16.34
365 TestNoKubernetes/serial/Start 7.58
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
371 TestNoKubernetes/serial/ProfileList 3.46
376 TestNetworkPlugins/group/false 5.37
377 TestNoKubernetes/serial/Stop 1.35
378 TestNoKubernetes/serial/StartNoArgs 6.79
382 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
391 TestPause/serial/Start 44.44
392 TestStoppedBinaryUpgrade/Setup 0.44
393 TestStoppedBinaryUpgrade/Upgrade 285.49
394 TestPause/serial/SecondStartNoReconfiguration 5.9
396 TestNetworkPlugins/group/auto/Start 39.2
397 TestNetworkPlugins/group/auto/KubeletFlags 0.29
398 TestNetworkPlugins/group/auto/NetCatPod 9.18
399 TestNetworkPlugins/group/auto/DNS 0.11
400 TestNetworkPlugins/group/auto/Localhost 0.09
401 TestNetworkPlugins/group/auto/HairPin 0.09
402 TestNetworkPlugins/group/kindnet/Start 42.62
403 TestNetworkPlugins/group/kindnet/ControllerPod 6
404 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
405 TestNetworkPlugins/group/kindnet/NetCatPod 8.17
406 TestNetworkPlugins/group/kindnet/DNS 0.12
407 TestNetworkPlugins/group/kindnet/Localhost 0.09
408 TestNetworkPlugins/group/kindnet/HairPin 0.09
409 TestNetworkPlugins/group/calico/Start 47.98
410 TestNetworkPlugins/group/custom-flannel/Start 51.64
411 TestNetworkPlugins/group/calico/ControllerPod 6.01
412 TestNetworkPlugins/group/calico/KubeletFlags 0.29
413 TestNetworkPlugins/group/calico/NetCatPod 9.19
414 TestNetworkPlugins/group/calico/DNS 0.11
415 TestNetworkPlugins/group/calico/Localhost 0.09
416 TestNetworkPlugins/group/calico/HairPin 0.09
417 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
418 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.17
419 TestNetworkPlugins/group/custom-flannel/DNS 0.11
420 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
421 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
422 TestNetworkPlugins/group/enable-default-cni/Start 61.45
423 TestNetworkPlugins/group/flannel/Start 54.01
424 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
425 TestNetworkPlugins/group/bridge/Start 70.26
427 TestStartStop/group/old-k8s-version/serial/FirstStart 50.69
428 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
429 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.17
430 TestNetworkPlugins/group/flannel/ControllerPod 6.01
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
435 TestNetworkPlugins/group/flannel/NetCatPod 9.19
436 TestNetworkPlugins/group/flannel/DNS 0.11
437 TestNetworkPlugins/group/flannel/Localhost 0.1
438 TestNetworkPlugins/group/flannel/HairPin 0.11
440 TestStartStop/group/no-preload/serial/FirstStart 46.21
441 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
442 TestNetworkPlugins/group/bridge/NetCatPod 9.27
443 TestStartStop/group/old-k8s-version/serial/DeployApp 10.28
445 TestStartStop/group/embed-certs/serial/FirstStart 40.98
446 TestNetworkPlugins/group/bridge/DNS 0.13
447 TestNetworkPlugins/group/bridge/Localhost 0.09
448 TestNetworkPlugins/group/bridge/HairPin 0.1
450 TestStartStop/group/old-k8s-version/serial/Stop 18.28
452 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.65
453 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
454 TestStartStop/group/old-k8s-version/serial/SecondStart 25.52
455 TestStartStop/group/no-preload/serial/DeployApp 9.25
457 TestStartStop/group/embed-certs/serial/DeployApp 7.22
458 TestStartStop/group/no-preload/serial/Stop 18.63
460 TestStartStop/group/embed-certs/serial/Stop 16.34
461 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
462 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
463 TestStartStop/group/no-preload/serial/SecondStart 43.92
464 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
466 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
468 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
469 TestStartStop/group/embed-certs/serial/SecondStart 47.12
472 TestStartStop/group/newest-cni/serial/FirstStart 32.77
473 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.84
474 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
475 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.7
476 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
477 TestStartStop/group/newest-cni/serial/DeployApp 0
479 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
480 TestStartStop/group/newest-cni/serial/Stop 8.72
481 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
482 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
484 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
485 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
486 TestStartStop/group/newest-cni/serial/SecondStart 10.43
487 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
489 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
490 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
491 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
493 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
494 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (4.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.26088173s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.26s)
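For anyone replaying this step by hand: with -o=json the start command emits one CloudEvent-style JSON object per line, which the json-events subtests consume. A minimal sketch for eyeballing that stream outside the test harness (the "download-only-demo" profile name is a placeholder, and the jq filter assumes each event carries a top-level "type" field; pretty-print whole events with "| jq ." if that assumption is off):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=docker | jq -r .type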

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1201 19:05:51.508112   16873 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1201 19:05:51.508190   16873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
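The same check can be reproduced by listing the preload cache directly; a minimal sketch, assuming MINIKUBE_HOME points at the .minikube directory shown in the log above (in this run, /home/jenkins/minikube-integration/21997-13091/.minikube):

    # the v1.28.0 CRI-O preload tarball the test looks for
    ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}"/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4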

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-874273
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-874273: exit status 85 (69.684454ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-874273 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:47.304096   16885 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:47.304321   16885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:47.304330   16885 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:47.304334   16885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:47.304519   16885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	W1201 19:05:47.304627   16885 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-13091/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-13091/.minikube/config/config.json: no such file or directory
	I1201 19:05:47.305569   16885 out.go:368] Setting JSON to true
	I1201 19:05:47.306485   16885 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2898,"bootTime":1764613049,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:47.306540   16885 start.go:143] virtualization: kvm guest
	I1201 19:05:47.311261   16885 out.go:99] [download-only-874273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1201 19:05:47.311420   16885 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball: no such file or directory
	I1201 19:05:47.311480   16885 notify.go:221] Checking for updates...
	I1201 19:05:47.312874   16885 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:05:47.314370   16885 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:47.315729   16885 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:05:47.317066   16885 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:05:47.318439   16885 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1201 19:05:47.320707   16885 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 19:05:47.320951   16885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:47.346559   16885 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:05:47.346641   16885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:47.732501   16885 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-01 19:05:47.723272005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:47.732618   16885 docker.go:319] overlay module found
	I1201 19:05:47.734268   16885 out.go:99] Using the docker driver based on user configuration
	I1201 19:05:47.734305   16885 start.go:309] selected driver: docker
	I1201 19:05:47.734312   16885 start.go:927] validating driver "docker" against <nil>
	I1201 19:05:47.734384   16885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:47.793636   16885 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-01 19:05:47.784916839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:47.793797   16885 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:47.794326   16885 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1201 19:05:47.794518   16885 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 19:05:47.796322   16885 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-874273 host does not exist
	  To start a cluster, run: "minikube start -p download-only-874273"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-874273
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (2.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-883422 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-883422 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.571208249s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1201 19:05:54.511585   16873 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1201 19:05:54.511630   16873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-883422
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-883422: exit status 85 (68.432282ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-874273 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-874273                                                                                                                                                   │ download-only-874273 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-883422 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-883422 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:51.991447   17250 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:51.991666   17250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:51.991674   17250 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:51.991678   17250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:51.991865   17250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:05:51.992307   17250 out.go:368] Setting JSON to true
	I1201 19:05:51.993041   17250 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2903,"bootTime":1764613049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:51.993087   17250 start.go:143] virtualization: kvm guest
	I1201 19:05:51.994977   17250 out.go:99] [download-only-883422] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:51.995078   17250 notify.go:221] Checking for updates...
	I1201 19:05:51.996478   17250 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:05:51.997695   17250 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:51.998942   17250 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:05:52.000083   17250 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:05:52.001315   17250 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1201 19:05:52.003476   17250 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 19:05:52.003697   17250 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:52.026647   17250 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:05:52.026748   17250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:52.082921   17250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-01 19:05:52.073407189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:52.083016   17250 docker.go:319] overlay module found
	I1201 19:05:52.084480   17250 out.go:99] Using the docker driver based on user configuration
	I1201 19:05:52.084506   17250 start.go:309] selected driver: docker
	I1201 19:05:52.084516   17250 start.go:927] validating driver "docker" against <nil>
	I1201 19:05:52.084598   17250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:52.141578   17250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-01 19:05:52.132145414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:52.141715   17250 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:52.142179   17250 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1201 19:05:52.142334   17250 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 19:05:52.144045   17250 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-883422 host does not exist
	  To start a cluster, run: "minikube start -p download-only-883422"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-883422
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (2.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-590206 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-590206 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.260006703s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-590206
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-590206: exit status 85 (71.419718ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874273 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-874273 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-874273                                                                                                                                                          │ download-only-874273 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-883422 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-883422 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-883422                                                                                                                                                          │ download-only-883422 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-590206 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-590206 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:54.991714   17602 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:54.991955   17602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:54.991963   17602 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:54.991967   17602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:54.992136   17602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:05:54.992562   17602 out.go:368] Setting JSON to true
	I1201 19:05:54.993302   17602 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2906,"bootTime":1764613049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:54.993351   17602 start.go:143] virtualization: kvm guest
	I1201 19:05:54.995256   17602 out.go:99] [download-only-590206] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:54.995394   17602 notify.go:221] Checking for updates...
	I1201 19:05:54.996564   17602 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:05:54.997785   17602 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:54.998991   17602 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:05:55.000251   17602 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:05:55.001516   17602 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1201 19:05:55.003883   17602 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 19:05:55.004135   17602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:55.026344   17602 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:05:55.026469   17602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:55.078640   17602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-01 19:05:55.068422532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:55.078741   17602 docker.go:319] overlay module found
	I1201 19:05:55.080427   17602 out.go:99] Using the docker driver based on user configuration
	I1201 19:05:55.080452   17602 start.go:309] selected driver: docker
	I1201 19:05:55.080459   17602 start.go:927] validating driver "docker" against <nil>
	I1201 19:05:55.080543   17602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:05:55.135493   17602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-01 19:05:55.125967037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:05:55.135636   17602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:55.136111   17602 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1201 19:05:55.136277   17602 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 19:05:55.138123   17602 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-590206 host does not exist
	  To start a cluster, run: "minikube start -p download-only-590206"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-590206
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-082948 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-082948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-082948
--- PASS: TestDownloadOnlyKic (0.40s)
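With the docker driver, a download-only start like this should also leave the kic base image in the local Docker daemon; a quick way to confirm that by hand (a sketch, not part of the test itself):

    # the kicbase image minikube uses for docker-driver nodes
    docker images | grep kicbase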

                                                
                                    
x
+
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1201 19:05:58.515868   16873 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-325932 --alsologtostderr --binary-mirror http://127.0.0.1:37241 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-325932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-325932
--- PASS: TestBinaryMirror (0.82s)
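The --binary-mirror flag exercised here just swaps the host that kubectl/kubelet/kubeadm are fetched from, so a plain HTTP server laid out like the default download URL logged above (dl.k8s.io) can stand in. A rough sketch, where the mirror directory, its exact path layout, and the profile name are all assumptions rather than what the test itself spins up:

    # serve a pre-populated mirror tree on the port this run happened to use
    python3 -m http.server 37241 --directory /srv/k8s-mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:37241 --driver=docker --container-runtime=crio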

                                                
                                    
x
+
TestOffline (64.58s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-665356 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-665356 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m0.094656449s)
helpers_test.go:175: Cleaning up "offline-crio-665356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-665356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-665356: (4.483483147s)
--- PASS: TestOffline (64.58s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-844427
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-844427: exit status 85 (62.290512ms)

                                                
                                                
-- stdout --
	* Profile "addons-844427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-844427"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-844427
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-844427: exit status 85 (62.973536ms)

                                                
                                                
-- stdout --
	* Profile "addons-844427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-844427"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (121.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-844427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-844427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m1.282701313s)
--- PASS: TestAddons/Setup (121.28s)
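Once a profile like this exists, individual addons from that long list can be inspected and toggled without rerunning the full setup; a minimal sketch using the same addons subcommands that appear elsewhere in this report:

    # check what is currently enabled on the profile
    out/minikube-linux-amd64 addons list -p addons-844427
    # turn a single addon off and back on
    out/minikube-linux-amd64 addons disable metrics-server -p addons-844427
    out/minikube-linux-amd64 addons enable metrics-server -p addons-844427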

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-844427 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-844427 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-844427 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-844427 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75ad87fe-d027-4b9e-8a21-f3d54dae5a67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75ad87fe-d027-4b9e-8a21-f3d54dae5a67] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.002609682s
addons_test.go:694: (dbg) Run:  kubectl --context addons-844427 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-844427 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-844427 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.42s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.74s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-844427
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-844427: (16.452237491s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-844427
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-844427
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-844427
--- PASS: TestAddons/StoppedEnableDisable (16.74s)

TestCertOptions (34.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-488320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-488320 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.822140438s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-488320 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-488320 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-488320 -- "sudo cat /etc/kubernetes/admin.conf"
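The assertion above greps the full -text dump for the requested IPs, names and port; the same information can be pulled more directly, a sketch assuming OpenSSL 1.1.1+ inside the node and the kubeconfig entry minikube writes for the profile:
out/minikube-linux-amd64 -p cert-options-488320 ssh "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"   # should list 192.168.15.15, localhost, www.google.com
kubectl --context cert-options-488320 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should end in :8555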
helpers_test.go:175: Cleaning up "cert-options-488320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-488320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-488320: (2.466940771s)
--- PASS: TestCertOptions (34.04s)

TestCertExpiration (214.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-453210 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-453210 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.422760529s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-453210 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1201 20:04:04.746345   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-453210 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.043751707s)
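Between the two starts the test lets the three-minute certificates lapse, then verifies that a restart regenerates them; the remaining validity can be inspected by hand while the profile is up (a sketch using the cert path minikube provisions above):
out/minikube-linux-amd64 -p cert-expiration-453210 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"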
helpers_test.go:175: Cleaning up "cert-expiration-453210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-453210
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-453210: (2.450818219s)
--- PASS: TestCertExpiration (214.92s)

TestForceSystemdFlag (29.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-882623 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-882623 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.112656461s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-882623 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
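The config check above is presumably asserting on the systemd cgroup manager that --force-systemd requests; a quicker grep against the same drop-in (key name per CRI-O's configuration format):
out/minikube-linux-amd64 -p force-systemd-flag-882623 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expected: cgroup_manager = "systemd"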
helpers_test.go:175: Cleaning up "force-systemd-flag-882623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-882623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-882623: (3.727601043s)
--- PASS: TestForceSystemdFlag (29.18s)

TestForceSystemdEnv (30.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-457376 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-457376 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.212993498s)
helpers_test.go:175: Cleaning up "force-systemd-env-457376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-457376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-457376: (2.74742433s)
--- PASS: TestForceSystemdEnv (30.96s)

TestErrorSpam/setup (18.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-886853 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-886853 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-886853 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-886853 --driver=docker  --container-runtime=crio: (18.832986929s)
--- PASS: TestErrorSpam/setup (18.83s)

TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (5.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause: exit status 80 (2.058347536s)

                                                
                                                
-- stdout --
	* Pausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause: exit status 80 (1.979590169s)

                                                
                                                
-- stdout --
	* Pausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause: exit status 80 (1.668028832s)

                                                
                                                
-- stdout --
	* Pausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 pause" failed: exit status 80
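All three pause attempts fail identically: the pause path shells out to `sudo runc list -f json` inside the node and runc cannot open its default state directory /run/runc. The failing call can be replayed directly; the first line is verbatim from the stderr above, the second is only a hypothetical sanity check on the directory:
out/minikube-linux-amd64 -p nospam-886853 ssh "sudo runc list -f json"   # fails: open /run/runc: no such file or directory
out/minikube-linux-amd64 -p nospam-886853 ssh "sudo ls -ld /run/runc"    # hypothetical check that the runc state directory exists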
--- PASS: TestErrorSpam/pause (5.71s)

TestErrorSpam/unpause (6.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause: exit status 80 (2.22880785s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause: exit status 80 (2.11043824s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause: exit status 80 (1.766801904s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-886853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-01T19:11:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.11s)

TestErrorSpam/stop (8.14s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 stop: (7.929995055s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886853 --log_dir /tmp/nospam-886853 stop
--- PASS: TestErrorSpam/stop (8.14s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/test/nested/copy/16873/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-764481 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m7.432430854s)
--- PASS: TestFunctional/serial/StartWithProxy (67.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.19s)

=== RUN   TestFunctional/serial/SoftStart
I1201 19:12:59.086884   16873 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --alsologtostderr -v=8
E1201 19:13:01.299393   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.306548   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.317914   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.339604   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.381109   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.463382   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.624859   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.946970   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:02.588340   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:03.870271   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-764481 --alsologtostderr -v=8: (6.189862449s)
functional_test.go:678: soft start took 6.190476165s for "functional-764481" cluster.
I1201 19:13:05.277101   16873 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.19s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-764481 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache add registry.k8s.io/pause:3.3
E1201 19:13:06.432404   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-764481 /tmp/TestFunctionalserialCacheCmdcacheadd_local3519953256/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache add minikube-local-cache-test:functional-764481
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache delete minikube-local-cache-test:functional-764481
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-764481
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.521174ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
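Condensed, the round trip the log shows is: remove the image from the node, reload everything held in minikube's local cache, and confirm the image is back (same profile and image as above):
out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-764481 cache reload
out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again after the reload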
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 kubectl -- --context functional-764481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-764481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (46.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1201 19:13:11.554122   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:21.795443   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:42.277452   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-764481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.426003912s)
functional_test.go:776: restart took 46.426109855s for "functional-764481" cluster.
I1201 19:13:57.548822   16873 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
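After the restart the apiserver should be running with the admission plugin passed via --extra-config; one way to confirm, assuming the usual kubeadm labels on the static pod (a sketch, not part of the test):
kubectl --context functional-764481 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins   # value should include NamespaceAutoProvision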
--- PASS: TestFunctional/serial/ExtraConfig (46.43s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-764481 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 logs: (1.174566674s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 logs --file /tmp/TestFunctionalserialLogsFileCmd2054326349/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 logs --file /tmp/TestFunctionalserialLogsFileCmd2054326349/001/logs.txt: (1.207210453s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (4.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-764481 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-764481
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-764481: exit status 115 (347.044711ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31871 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-764481 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 config get cpus: exit status 14 (93.710513ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 config get cpus: exit status 14 (74.637649ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
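The two exit-status-14 results are the expected behavior when the key is unset; condensed, the cycle the test drives is:
out/minikube-linux-amd64 -p functional-764481 config set cpus 2
out/minikube-linux-amd64 -p functional-764481 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-764481 config unset cpus
out/minikube-linux-amd64 -p functional-764481 config get cpus     # exits 14: specified key could not be found in config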
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (7.55s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-764481 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-764481 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 56530: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.55s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.755414ms)

                                                
                                                
-- stdout --
	* [functional-764481] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:14:29.874366   56104 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:14:29.874649   56104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:29.874660   56104 out.go:374] Setting ErrFile to fd 2...
	I1201 19:14:29.874665   56104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:29.874921   56104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:14:29.875422   56104 out.go:368] Setting JSON to false
	I1201 19:14:29.876375   56104 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3421,"bootTime":1764613049,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:14:29.876439   56104 start.go:143] virtualization: kvm guest
	I1201 19:14:29.878465   56104 out.go:179] * [functional-764481] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:14:29.879951   56104 notify.go:221] Checking for updates...
	I1201 19:14:29.879970   56104 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:14:29.881495   56104 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:14:29.882903   56104 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:14:29.884397   56104 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:14:29.886676   56104 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:14:29.890571   56104 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:14:29.892510   56104 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:14:29.893031   56104 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:14:29.917270   56104 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:14:29.917413   56104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:14:29.973650   56104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-01 19:14:29.963885127 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:14:29.973752   56104 docker.go:319] overlay module found
	I1201 19:14:29.976335   56104 out.go:179] * Using the docker driver based on existing profile
	I1201 19:14:29.977670   56104 start.go:309] selected driver: docker
	I1201 19:14:29.977695   56104 start.go:927] validating driver "docker" against &{Name:functional-764481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-764481 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:14:29.977770   56104 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:14:29.979436   56104 out.go:203] 
	W1201 19:14:29.980748   56104 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 19:14:29.982019   56104 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-764481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.418093ms)

                                                
                                                
-- stdout --
	* [functional-764481] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:14:28.765917   55669 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:14:28.766197   55669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:28.766208   55669 out.go:374] Setting ErrFile to fd 2...
	I1201 19:14:28.766212   55669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:14:28.766484   55669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:14:28.766911   55669 out.go:368] Setting JSON to false
	I1201 19:14:28.767923   55669 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3420,"bootTime":1764613049,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:14:28.767984   55669 start.go:143] virtualization: kvm guest
	I1201 19:14:28.770070   55669 out.go:179] * [functional-764481] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1201 19:14:28.771431   55669 notify.go:221] Checking for updates...
	I1201 19:14:28.771457   55669 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:14:28.772973   55669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:14:28.774449   55669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:14:28.775644   55669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:14:28.779838   55669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:14:28.780995   55669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:14:28.782583   55669 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:14:28.783156   55669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:14:28.806688   55669 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:14:28.806783   55669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:14:28.863367   55669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-01 19:14:28.853168585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:14:28.863460   55669 docker.go:319] overlay module found
	I1201 19:14:28.865244   55669 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1201 19:14:28.866468   55669 start.go:309] selected driver: docker
	I1201 19:14:28.866481   55669 start.go:927] validating driver "docker" against &{Name:functional-764481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-764481 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:14:28.866570   55669 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:14:28.868175   55669 out.go:203] 
	W1201 19:14:28.869246   55669 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 19:14:28.870363   55669 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
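Editor's note: this test only checks that minikube's output is localized. Run under a French locale against a too-small memory request, the command must fail with the French form of the RSRC_INSUFFICIENT_REQ_MEMORY message (roughly: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB"). A minimal sketch of reproducing it by hand, assuming minikube selects its message catalog from the standard locale environment variables (the harness's exact mechanism is not shown in this log):

    # Force a French locale for one invocation; exit status 23 is expected
    # because 250MB is below the 1800MB minimum.
    LC_ALL=fr_FR.UTF-8 LANG=fr_FR.UTF-8 \
      out/minikube-linux-amd64 start -p functional-764481 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio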

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)
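Editor's note: the second invocation above shows that status accepts a Go template for selecting individual fields, and the third that it can emit JSON. A short sketch of the same calls for scripting (field names are the ones used in the command above):

    # Only the control-plane health fields, as a single line:
    out/minikube-linux-amd64 -p functional-764481 status \
      -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # Machine-readable form:
    out/minikube-linux-amd64 -p functional-764481 status -o json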

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e8e4a483-83a2-44dd-a707-a6b9fe93dc1f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004436039s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-764481 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-764481 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-764481 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-764481 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:14:12.647912   16873 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ef7b06c4-3d77-486a-8247-c027be7cc7a0] Pending
helpers_test.go:352: "sp-pod" [ef7b06c4-3d77-486a-8247-c027be7cc7a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ef7b06c4-3d77-486a-8247-c027be7cc7a0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003545935s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-764481 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-764481 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-764481 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c617c17e-3ac6-49b5-bfae-41959b8b3c44] Pending
helpers_test.go:352: "sp-pod" [c617c17e-3ac6-49b5-bfae-41959b8b3c44] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002996941s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-764481 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.18s)
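Editor's note: the sequence above is create a PVC, run a pod that mounts it, write a file through the pod, delete and recreate the pod, and confirm the file survived. The same flow by hand, reusing the repo's manifests referenced in the log (the pod mounts the claim at /tmp/mount; the wait timeouts are arbitrary choices, not the harness's values):

    kubectl --context functional-764481 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-764481 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-764481 wait --for=condition=Ready pod/sp-pod --timeout=120s
    kubectl --context functional-764481 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-764481 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-764481 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-764481 wait --for=condition=Ready pod/sp-pod --timeout=120s
    kubectl --context functional-764481 exec sp-pod -- ls /tmp/mount    # foo should still be listed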

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh -n functional-764481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cp functional-764481:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1042050918/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh -n functional-764481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh -n functional-764481 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)
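Editor's note: the three cp calls above cover host-to-node, node-to-host, and copying to a node path whose parent directory does not exist yet. A compact sketch of the first two directions (file names are examples):

    # host -> node, then node -> host; <profile>:<path> addresses a file inside the node
    out/minikube-linux-amd64 -p functional-764481 cp ./notes.txt /home/docker/notes.txt
    out/minikube-linux-amd64 -p functional-764481 cp functional-764481:/home/docker/notes.txt /tmp/notes.txt
    out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /home/docker/notes.txt"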

                                                
                                    
x
+
TestFunctional/parallel/MySQL (18.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-764481 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-mmnmh" [3ab688e9-4773-400a-8660-28fba9e415f2] Pending
helpers_test.go:352: "mysql-5bb876957f-mmnmh" [3ab688e9-4773-400a-8660-28fba9e415f2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-mmnmh" [3ab688e9-4773-400a-8660-28fba9e415f2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003351339s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- mysql -ppassword -e "show databases;": exit status 1 (89.392732ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1201 19:14:18.838552   16873 retry.go:31] will retry after 1.354369545s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- mysql -ppassword -e "show databases;": exit status 1 (105.927837ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1201 19:14:20.299774   16873 retry.go:31] will retry after 2.159123332s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.03s)
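Editor's note: the two non-zero exits above are just mysqld still starting inside the pod (ERROR 2002 on the unix socket); the harness retries until the query succeeds. The same idea as a plain loop (pod name taken from the log above; retry count and delay are arbitrary):

    for i in $(seq 1 10); do
      kubectl --context functional-764481 exec mysql-5bb876957f-mmnmh -- \
        mysql -ppassword -e "show databases;" && break
      sleep 3
    done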

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16873/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /etc/test/nested/copy/16873/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16873.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /etc/ssl/certs/16873.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16873.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /usr/share/ca-certificates/16873.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/168732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /etc/ssl/certs/168732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/168732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /usr/share/ca-certificates/168732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
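Editor's note: each certificate is checked in three places inside the node: /etc/ssl/certs/<pid>.pem, /usr/share/ca-certificates/<pid>.pem, and the hash-named copy (51391683.0 / 3ec20f2e.0). A quick way to confirm the copies are identical, using only paths that appear above:

    out/minikube-linux-amd64 -p functional-764481 ssh \
      "sudo sha256sum /etc/ssl/certs/16873.pem /usr/share/ca-certificates/16873.pem /etc/ssl/certs/51391683.0"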

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-764481 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "sudo systemctl is-active docker": exit status 1 (333.571414ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "sudo systemctl is-active containerd": exit status 1 (330.677475ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
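Editor's note: exit status 1 here is the expected outcome. systemctl is-active exits non-zero for anything other than "active", and with crio selected as the runtime both docker and containerd should report inactive. The complementary check (unit name assumed to be crio.service, as in a standard CRI-O install):

    # expected: "active" and exit 0, since crio is the configured runtime for this profile
    out/minikube-linux-amd64 -p functional-764481 ssh "sudo systemctl is-active crio"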

                                                
                                    
x
+
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
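Editor's note: update-context rewrites the kubeconfig entry for the profile so its server address matches the running cluster after an IP or port change. A quick before/after check (the jsonpath filter is illustrative):

    out/minikube-linux-amd64 -p functional-764481 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-764481")].cluster.server}'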

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 50498: os: process already finished
helpers_test.go:519: unable to terminate pid 50051: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-764481 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6b8db5e9-098e-47c5-8249-ea9db0a55f56] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6b8db5e9-098e-47c5-8249-ea9db0a55f56] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.003911285s
I1201 19:14:20.210437   16873 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.33s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdany-port1543857991/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764616446639680407" to /tmp/TestFunctionalparallelMountCmdany-port1543857991/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764616446639680407" to /tmp/TestFunctionalparallelMountCmdany-port1543857991/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764616446639680407" to /tmp/TestFunctionalparallelMountCmdany-port1543857991/001/test-1764616446639680407
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.808214ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:14:06.962788   16873 retry.go:31] will retry after 727.252453ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 19:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 19:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 19:14 test-1764616446639680407
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh cat /mount-9p/test-1764616446639680407
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-764481 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0ae0eda3-c805-4c58-b03b-51d2ba0bc4c9] Pending
helpers_test.go:352: "busybox-mount" [0ae0eda3-c805-4c58-b03b-51d2ba0bc4c9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0ae0eda3-c805-4c58-b03b-51d2ba0bc4c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0ae0eda3-c805-4c58-b03b-51d2ba0bc4c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003182141s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-764481 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdany-port1543857991/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.17s)
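Editor's note: the flow above starts a 9p mount of a host temp directory at /mount-9p, retries findmnt until the mount shows up, then has a busybox pod read and write files through it. A manual equivalent (the host directory is an example; the mount command stays in the foreground unless backgrounded):

    mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/hello.txt
    out/minikube-linux-amd64 mount -p functional-764481 /tmp/demo-mount:/mount-9p &
    sleep 2
    out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p && cat /mount-9p/hello.txt"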

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdspecific-port4173328378/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.704069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:14:19.093158   16873 retry.go:31] will retry after 686.23059ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdspecific-port4173328378/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "sudo umount -f /mount-9p": exit status 1 (288.158964ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-764481 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdspecific-port4173328378/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-764481 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.153.40 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
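Editor's note: the tunnel path works end to end here: once the tunnel process is running, the LoadBalancer service nginx-svc gets an ingress IP and plain HTTP to that IP reaches the pod. The manual version (the 10.109.153.40 address above is cluster-assigned and will differ per run; on Linux the tunnel may prompt for sudo to add routes):

    out/minikube-linux-amd64 -p functional-764481 tunnel &
    IP=$(kubectl --context functional-764481 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP" | head -n 5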

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-764481 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T" /mount1: exit status 1 (332.246617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:14:21.172581   16873 retry.go:31] will retry after 384.537564ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-764481 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977143097/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)
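Editor's note: three mounts of the same host directory are started, then torn down with the --kill flag, which the harness uses here to stop the running mount processes for the profile in one call. A sketch of the cleanup half on its own:

    out/minikube-linux-amd64 mount -p functional-764481 --kill=true
    out/minikube-linux-amd64 -p functional-764481 ssh "findmnt -T /mount1 || echo unmounted"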

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
E1201 19:14:23.241006   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1330: Took "346.641512ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.424471ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "339.686766ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.767928ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
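Editor's note: the -o json and --light variants are meant for scripting, and the timings above show the light listing skipping the per-profile status probe (~57ms vs ~340ms). A hedged parsing example (the .valid[].Name field names are assumed from current minikube JSON output, and jq on the host is also an assumption):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'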

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764481 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764481 image ls --format short --alsologtostderr:
I1201 19:14:32.904273   56859 out.go:360] Setting OutFile to fd 1 ...
I1201 19:14:32.904430   56859 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:32.904442   56859 out.go:374] Setting ErrFile to fd 2...
I1201 19:14:32.904449   56859 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:32.904650   56859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:14:32.905195   56859 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:32.905306   56859 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:32.905706   56859 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
I1201 19:14:32.924024   56859 ssh_runner.go:195] Run: systemctl --version
I1201 19:14:32.924079   56859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
I1201 19:14:32.941212   56859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
I1201 19:14:33.039571   56859 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls --format table --alsologtostderr
2025/12/01 19:14:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764481 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-764481  │ b3dfca4d29b24 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764481 image ls --format table --alsologtostderr:
I1201 19:14:37.439230   57632 out.go:360] Setting OutFile to fd 1 ...
I1201 19:14:37.439470   57632 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:37.439480   57632 out.go:374] Setting ErrFile to fd 2...
I1201 19:14:37.439484   57632 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:37.439671   57632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:14:37.440187   57632 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:37.440275   57632 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:37.440671   57632 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
I1201 19:14:37.458095   57632 ssh_runner.go:195] Run: systemctl --version
I1201 19:14:37.458146   57632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
I1201 19:14:37.474887   57632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
I1201 19:14:37.573391   57632 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764481 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f
5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0d
c00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/libra
ry/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":
"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"
id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"b3dfca4d29b24c8c5be9181387ea3c8393005d6773ac646f936622d37371d4ed","repoDigests":["localhost/my-image@sha256:7619fcc770387504480320461f319232705a636d10300f4edd882dc37480c6b4"],"repoTags":["localhost/my-image:functional-764481"],"size":"1468744"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df
59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"0386eb5a42189a12ed5dbce040306bb1d192046f42215a755d324e92c58b82d5","repoDigests":["docker.io/library/88a01b35eba2d7038596fcc993347fcca944257127b05069de4804c4308f9706-tmp@sha256:4a7bf327d5861b388eb90ff0621d17ef377ae9e6504d5a94e964534eb64687cd"],"repoTags":[],"size":"1466132"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764481 image ls --format json --alsologtostderr:
I1201 19:14:37.217916   57575 out.go:360] Setting OutFile to fd 1 ...
I1201 19:14:37.218018   57575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:37.218026   57575 out.go:374] Setting ErrFile to fd 2...
I1201 19:14:37.218030   57575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:37.218220   57575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:14:37.218757   57575 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:37.218863   57575 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:37.219406   57575 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
I1201 19:14:37.237826   57575 ssh_runner.go:195] Run: systemctl --version
I1201 19:14:37.237886   57575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
I1201 19:14:37.255062   57575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
I1201 19:14:37.352761   57575 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
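For reference, the JSON listing above can be regenerated against the same profile and pretty-printed on the host; a minimal sketch, assuming the cluster from this run is still up and python3 is on the host (the json.tool pipe is only for readability and is not part of the test):

    # re-run the listing and pretty-print it
    out/minikube-linux-amd64 -p functional-764481 image ls --format json | python3 -m json.tool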

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764481 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764481 image ls --format yaml --alsologtostderr:
I1201 19:14:33.126354   56927 out.go:360] Setting OutFile to fd 1 ...
I1201 19:14:33.126625   56927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:33.126636   56927 out.go:374] Setting ErrFile to fd 2...
I1201 19:14:33.126642   56927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:33.126884   56927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:14:33.127455   56927 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:33.127571   56927 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:33.127995   56927 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
I1201 19:14:33.146870   56927 ssh_runner.go:195] Run: systemctl --version
I1201 19:14:33.146936   56927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
I1201 19:14:33.167378   56927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
I1201 19:14:33.273977   56927 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
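The YAML listing comes from the same code path; as the stderr above shows, minikube simply runs crictl inside the node over ssh and reshapes the output. A sketch of both views, assuming the same profile (combining the ssh form with crictl's --output flag is an illustration, not a command taken verbatim from this run):

    # list images through minikube, then the raw crictl output it is built on
    out/minikube-linux-amd64 -p functional-764481 image ls --format yaml
    out/minikube-linux-amd64 -p functional-764481 ssh sudo crictl images --output json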

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764481 ssh pgrep buildkitd: exit status 1 (301.815423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr: (3.324860161s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0386eb5a421
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-764481
--> b3dfca4d29b
Successfully tagged localhost/my-image:functional-764481
b3dfca4d29b24c8c5be9181387ea3c8393005d6773ac646f936622d37371d4ed
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr:
I1201 19:14:33.670217   57095 out.go:360] Setting OutFile to fd 1 ...
I1201 19:14:33.670356   57095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:33.670366   57095 out.go:374] Setting ErrFile to fd 2...
I1201 19:14:33.670370   57095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:14:33.670559   57095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:14:33.671131   57095 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:33.671712   57095 config.go:182] Loaded profile config "functional-764481": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:14:33.672133   57095 cli_runner.go:164] Run: docker container inspect functional-764481 --format={{.State.Status}}
I1201 19:14:33.691712   57095 ssh_runner.go:195] Run: systemctl --version
I1201 19:14:33.691778   57095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-764481
I1201 19:14:33.712250   57095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-764481/id_rsa Username:docker}
I1201 19:14:33.812475   57095 build_images.go:162] Building image from path: /tmp/build.398028224.tar
I1201 19:14:33.812536   57095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 19:14:33.820816   57095 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.398028224.tar
I1201 19:14:33.824958   57095 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.398028224.tar: stat -c "%s %y" /var/lib/minikube/build/build.398028224.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.398028224.tar': No such file or directory
I1201 19:14:33.824981   57095 ssh_runner.go:362] scp /tmp/build.398028224.tar --> /var/lib/minikube/build/build.398028224.tar (3072 bytes)
I1201 19:14:33.843001   57095 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.398028224
I1201 19:14:33.852318   57095 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.398028224 -xf /var/lib/minikube/build/build.398028224.tar
I1201 19:14:33.861959   57095 crio.go:315] Building image: /var/lib/minikube/build/build.398028224
I1201 19:14:33.862025   57095 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-764481 /var/lib/minikube/build/build.398028224 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1201 19:14:36.915744   57095 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-764481 /var/lib/minikube/build/build.398028224 --cgroup-manager=cgroupfs: (3.053679209s)
I1201 19:14:36.915815   57095 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.398028224
I1201 19:14:36.923652   57095 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.398028224.tar
I1201 19:14:36.931051   57095 build_images.go:218] Built localhost/my-image:functional-764481 from /tmp/build.398028224.tar
I1201 19:14:36.931083   57095 build_images.go:134] succeeded building to: functional-764481
I1201 19:14:36.931090   57095 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)
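As the stderr above shows, image build on the crio runtime tars the build context, copies it to /var/lib/minikube/build inside the node, and drives sudo podman build there. The flow can be repeated by hand with the same commands the test runs, assuming the profile is still up:

    # buildkitd is absent on crio, so the pgrep is expected to exit 1
    out/minikube-linux-amd64 -p functional-764481 ssh pgrep buildkitd
    out/minikube-linux-amd64 -p functional-764481 image build -t localhost/my-image:functional-764481 testdata/build --alsologtostderr
    # the new tag should now appear in the listing
    out/minikube-linux-amd64 -p functional-764481 image ls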

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-764481
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image rm kicbase/echo-server:functional-764481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
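ImageRemove is the inverse of the Setup step above: the tag created from kicbase/echo-server is dropped and the follow-up listing confirms it is gone. A sketch using the same two commands the test runs:

    out/minikube-linux-amd64 -p functional-764481 image rm kicbase/echo-server:functional-764481 --alsologtostderr
    # the removed tag should no longer be listed
    out/minikube-linux-amd64 -p functional-764481 image ls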

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 version -o=json --components
E1201 19:15:45.163004   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:18:01.298663   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:18:29.004530   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:23:01.299230   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 service list: (1.723664696s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-764481 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-764481 service list -o json: (1.69790752s)
functional_test.go:1504: Took "1.697991706s" to run "out/minikube-linux-amd64 -p functional-764481 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
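service list -o json returns the same table as service list in machine-readable form; a minimal sketch for inspecting it by hand (the json.tool pipe is only for readability and assumes python3 on the host):

    out/minikube-linux-amd64 -p functional-764481 service list -o json | python3 -m json.tool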

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-764481
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-764481
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-764481
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-13091/.minikube/files/etc/test/nested/copy/16873/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (43.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-415638 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (43.135291351s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (43.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1201 19:25:16.605478   16873 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-415638 --alsologtostderr -v=8: (6.330716166s)
functional_test.go:678: soft start took 6.331169279s for "functional-415638" cluster.
I1201 19:25:22.937032   16873 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-415638 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.69s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach606435400/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache add minikube-local-cache-test:functional-415638
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache delete minikube-local-cache-test:functional-415638
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-415638
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.72s)
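The add_local case round-trips a locally built image through the host-side cache. The build-context path below is this run's temporary test directory, so substitute any directory containing a Dockerfile when reproducing:

    docker build -t minikube-local-cache-test:functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach606435400/001
    out/minikube-linux-amd64 -p functional-415638 cache add minikube-local-cache-test:functional-415638
    out/minikube-linux-amd64 -p functional-415638 cache delete minikube-local-cache-test:functional-415638
    docker rmi minikube-local-cache-test:functional-415638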

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.373077ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)
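cache_reload verifies that an image deleted inside the node is restored from the host cache. The sequence the test runs, reproducible as-is against the same profile:

    out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # exits 1 while the image is absent
    out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-415638 cache reload
    # succeeds again after the reload
    out/minikube-linux-amd64 -p functional-415638 ssh sudo crictl inspecti registry.k8s.io/pause:latest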

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 kubectl -- --context functional-415638 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-415638 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-415638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.47808458s)
functional_test.go:776: restart took 59.478205405s for "functional-415638" cluster.
I1201 19:26:28.194825   16873 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.48s)
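--extra-config entries take the form component.flag=value and are passed to the named control-plane component on restart; here the apiserver's enable-admission-plugins flag, which also shows up as ExtraOptions in the profile dump further below. The restart the test performs is simply:

    out/minikube-linux-amd64 start -p functional-415638 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all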

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-415638 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
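ComponentHealth pulls the control-plane pods by label and checks each pod's phase and readiness. The first command below is the one the test runs; the jsonpath variant is an illustrative shortcut for eyeballing the same fields and is not part of the test:

    kubectl --context functional-415638 get po -l tier=control-plane -n kube-system -o=json
    kubectl --context functional-415638 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'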

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 logs: (1.226386359s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3894812844/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3894812844/001/logs.txt: (1.256944409s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (8.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-415638 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-415638
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-415638: exit status 115 (341.040311ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32619 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-415638 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-415638 delete -f testdata/invalidsvc.yaml: (4.835329401s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (8.35s)
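InvalidService confirms that minikube service refuses a Service with no running backing pod: the apply succeeds, the service command exits 115 with SVC_UNREACHABLE, and the delete cleans up. Reproducible with the test's own manifest against the same profile:

    kubectl --context functional-415638 apply -f testdata/invalidsvc.yaml
    # expected: exit 115, X Exiting due to SVC_UNREACHABLE
    out/minikube-linux-amd64 service invalid-svc -p functional-415638
    kubectl --context functional-415638 delete -f testdata/invalidsvc.yaml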

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 config get cpus: exit status 14 (90.375477ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 config get cpus: exit status 14 (83.787106ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
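The config subcommand stores minikube configuration keys; config get on an unset key exits 14, which is what the two non-zero exits above assert. The round trip the test exercises:

    # exit 14 while the key is unset
    out/minikube-linux-amd64 -p functional-415638 config get cpus
    out/minikube-linux-amd64 -p functional-415638 config set cpus 2
    # prints the stored value
    out/minikube-linux-amd64 -p functional-415638 config get cpus
    out/minikube-linux-amd64 -p functional-415638 config unset cpus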

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-415638 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-415638 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 78817: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.46s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-415638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (166.35042ms)

                                                
                                                
-- stdout --
	* [functional-415638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:26:41.990984   74922 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:26:41.991118   74922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:41.991129   74922 out.go:374] Setting ErrFile to fd 2...
	I1201 19:26:41.991136   74922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:41.991382   74922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:26:41.991834   74922 out.go:368] Setting JSON to false
	I1201 19:26:41.992773   74922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4153,"bootTime":1764613049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:26:41.992837   74922 start.go:143] virtualization: kvm guest
	I1201 19:26:41.994769   74922 out.go:179] * [functional-415638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:26:41.996137   74922 notify.go:221] Checking for updates...
	I1201 19:26:41.996161   74922 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:26:41.997673   74922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:26:41.998873   74922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:26:42.000046   74922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:26:42.001224   74922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:26:42.002413   74922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:26:42.004000   74922 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:26:42.004731   74922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:26:42.028434   74922 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:26:42.028575   74922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:26:42.086875   74922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-01 19:26:42.076569289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:26:42.086995   74922 docker.go:319] overlay module found
	I1201 19:26:42.088651   74922 out.go:179] * Using the docker driver based on existing profile
	I1201 19:26:42.089730   74922 start.go:309] selected driver: docker
	I1201 19:26:42.089743   74922 start.go:927] validating driver "docker" against &{Name:functional-415638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-415638 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:26:42.089839   74922 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:26:42.091550   74922 out.go:203] 
	W1201 19:26:42.092606   74922 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 19:26:42.093608   74922 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)
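--dry-run runs flag and resource validation against the existing profile without touching the cluster, which is why the 250MB request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY, below the 1800MB usable minimum) while the second dry run without the memory override passes. The two invocations from the test:

    # expected: exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-415638 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # passes: no memory override
    out/minikube-linux-amd64 start -p functional-415638 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0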

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-415638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-415638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (188.280278ms)

                                                
                                                
-- stdout --
	* [functional-415638] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:26:39.628480   72983 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:26:39.628754   72983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:39.628766   72983 out.go:374] Setting ErrFile to fd 2...
	I1201 19:26:39.628773   72983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:26:39.629141   72983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:26:39.629579   72983 out.go:368] Setting JSON to false
	I1201 19:26:39.630495   72983 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4151,"bootTime":1764613049,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:26:39.630591   72983 start.go:143] virtualization: kvm guest
	I1201 19:26:39.632944   72983 out.go:179] * [functional-415638] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1201 19:26:39.634263   72983 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:26:39.634318   72983 notify.go:221] Checking for updates...
	I1201 19:26:39.640540   72983 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:26:39.641674   72983 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 19:26:39.643044   72983 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 19:26:39.644191   72983 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:26:39.645230   72983 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:26:39.646715   72983 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:26:39.647554   72983 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:26:39.674112   72983 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 19:26:39.674213   72983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:26:39.734416   72983 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-01 19:26:39.723823825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:26:39.734570   72983 docker.go:319] overlay module found
	I1201 19:26:39.736250   72983 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1201 19:26:39.737549   72983 start.go:309] selected driver: docker
	I1201 19:26:39.737576   72983 start.go:927] validating driver "docker" against &{Name:functional-415638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-415638 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:26:39.737669   72983 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:26:39.739523   72983 out.go:203] 
	W1201 19:26:39.740586   72983 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 19:26:39.741884   72983 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
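
Note on why both dry-run checks above end in exit status 23: the error text (English for DryRun, French here) says the run requested 250MiB while the usable minimum is 1800MB, so minikube start aborts during pre-flight validation before any driver work happens. The Go sketch below illustrates that kind of check; the constant and function names are made up for illustration and are not minikube's actual identifiers.

package main

import (
	"fmt"
	"os"
)

// Illustrative value taken from the error text above; the real check lives in
// minikube's start command and uses its own constants and localized output.
const minUsableMemMiB = 1800

// validateRequestedMemory mirrors the behaviour seen in the log: a request
// below the usable minimum is rejected before the driver is ever started.
func validateRequestedMemory(requestedMiB int) error {
	if requestedMiB < minUsableMemMiB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMemMiB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // the exit status both tests assert on
	}
}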

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [99872346-dbd6-470d-930d-31ca0c6b94de] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00384125s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-415638 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-415638 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-415638 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-415638 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:26:47.677769   16873 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [669ff2f7-480b-4e05-8762-6b30fb604ff4] Pending
helpers_test.go:352: "sp-pod" [669ff2f7-480b-4e05-8762-6b30fb604ff4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [669ff2f7-480b-4e05-8762-6b30fb604ff4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.002490428s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-415638 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-415638 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-415638 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [309c0c03-8283-449b-abed-44de378c827e] Pending
helpers_test.go:352: "sp-pod" [309c0c03-8283-449b-abed-44de378c827e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [309c0c03-8283-449b-abed-44de378c827e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003703478s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-415638 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (25.35s)
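
The claim/pod round-trip above is driven entirely through kubectl: apply pvc.yaml, apply a pod that mounts the claim, wait for it, write /tmp/mount/foo, delete and recreate the pod, then confirm the file is still there. A rough sketch of the "waiting ... for pods matching ..." step is below; the context, namespace, label selector and timeout are taken from the log, while the helper itself is illustrative rather than the shared code in helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodRunning polls `kubectl get pods` for a label selector until a
// matching pod reports phase Running or the timeout expires. Illustrative
// only; the real test goes through shared helpers in helpers_test.go.
func waitForPodRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", namespace, "get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	// Context, namespace, selector and timeout as used in the log above.
	err := waitForPodRunning("functional-415638", "default", "test=storage-provisioner", 6*time.Minute)
	fmt.Println(err)
}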

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh -n functional-415638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cp functional-415638:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4150774717/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh -n functional-415638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh -n functional-415638 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-415638 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-5tdph" [6a03eaeb-851e-4510-ae19-04c8f0d796c6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-5tdph" [6a03eaeb-851e-4510-ae19-04c8f0d796c6] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 13.003765893s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-415638 exec mysql-844cf969f6-5tdph -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-415638 exec mysql-844cf969f6-5tdph -- mysql -ppassword -e "show databases;": exit status 1 (84.834973ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1201 19:27:13.655837   16873 retry.go:31] will retry after 1.326817936s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-415638 exec mysql-844cf969f6-5tdph -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-415638 exec mysql-844cf969f6-5tdph -- mysql -ppassword -e "show databases;": exit status 1 (109.333199ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1201 19:27:15.092874   16873 retry.go:31] will retry after 758.049144ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-415638 exec mysql-844cf969f6-5tdph -- mysql -ppassword -e "show databases;"
E1201 19:28:01.298404   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.745956   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.752347   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.763660   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.785037   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.826469   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.907950   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:05.069481   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:05.391445   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:06.033268   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:07.315351   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:09.877209   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:14.999121   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:24.366943   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:25.241059   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:45.723225   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:26.684687   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:31:48.606912   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:33:01.298573   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:34:04.745858   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:34:32.449253   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (15.53s)
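
The two ERROR 2002 failures above are expected noise: the pod reports Running before mysqld has created its socket, so the test retries the exec with a growing delay (the retry.go:31 lines). A generic sketch of that retry shape is below; the attempt count and backoff values are illustrative, not minikube's retry package.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithBackoff retries a kubectl exec until it succeeds or attempts run out,
// roughly doubling the wait each time, as the retry.go lines above suggest.
func runWithBackoff(attempts int, initial time.Duration, args ...string) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return err
}

func main() {
	// Pod name and query copied from the log; the deployment name will differ per run.
	err := runWithBackoff(5, time.Second,
		"--context", "functional-415638", "exec", "mysql-844cf969f6-5tdph", "--",
		"mysql", "-ppassword", "-e", "show databases;")
	fmt.Println("final result:", err)
}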

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16873/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /etc/test/nested/copy/16873/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16873.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /etc/ssl/certs/16873.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16873.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /usr/share/ca-certificates/16873.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/168732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /etc/ssl/certs/168732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/168732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /usr/share/ca-certificates/168732.pem"
I1201 19:26:59.586689   16873 detect.go:223] nested VM detected
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.69s)
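
About the file names above: 16873.pem is named after the test process ID and is synced in from the host, while 51391683.0 and 3ec20f2e.0 follow the OpenSSL subject-hash naming that CA directories use, so the same certificates are reachable under both paths. Below is a hedged sketch of reproducing the check by hand; the profile name and paths come from the log, the hash is obtained by shelling out to openssl, and a local copy of the certificate is assumed to sit in the working directory.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// verifyCertSync checks that a synced certificate is readable in the guest both
// under its own name and under its OpenSSL subject-hash name (<hash>.0).
// Sketch only; the test does the equivalent via `minikube ssh "sudo cat ..."`.
func verifyCertSync(profile, certName string) error {
	// Ask openssl for the subject hash of the local copy of the cert.
	hashOut, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", certName).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(hashOut))

	for _, path := range []string{
		"/etc/ssl/certs/" + certName,
		"/etc/ssl/certs/" + hash + ".0",
	} {
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo cat "+path).Run(); err != nil {
			return fmt.Errorf("%s not readable in guest: %v", path, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(verifyCertSync("functional-415638", "16873.pem"))
}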

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-415638 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "sudo systemctl is-active docker": exit status 1 (302.923606ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "sudo systemctl is-active containerd": exit status 1 (300.093536ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "417.037124ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.12353ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1262023812/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764617199748428664" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1262023812/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764617199748428664" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1262023812/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764617199748428664" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1262023812/001/test-1764617199748428664
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.957305ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 19:26:40.085748   16873 retry.go:31] will retry after 686.816467ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 19:26 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 19:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 19:26 test-1764617199748428664
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh cat /mount-9p/test-1764617199748428664
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-415638 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [102a4bf6-48b9-4190-9faf-056dbe9e1c33] Pending
helpers_test.go:352: "busybox-mount" [102a4bf6-48b9-4190-9faf-056dbe9e1c33] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [102a4bf6-48b9-4190-9faf-056dbe9e1c33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [102a4bf6-48b9-4190-9faf-056dbe9e1c33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003240133s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-415638 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1262023812/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.20s)
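
The single findmnt failure at the start of this block is just a race: minikube mount runs as a background daemon and the 9p mount takes a moment to appear, so the test retries before using it. A rough Go sketch of that start-then-poll pattern follows; /tmp/somedir is a placeholder host directory, while the binary path, profile name and mount point are copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the mount daemon in the background, the way the test does.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-415638", "/tmp/somedir:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("mount failed to start:", err)
		return
	}
	defer mount.Process.Kill() // the test stops the daemon once it is done

	// Poll until the 9p filesystem shows up inside the guest.
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-415638",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("9p mount never appeared")
}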

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "371.224266ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.314876ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-415638 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-415638 image ls --format short --alsologtostderr:
I1201 19:27:08.268411   80605 out.go:360] Setting OutFile to fd 1 ...
I1201 19:27:08.268503   80605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.268507   80605 out.go:374] Setting ErrFile to fd 2...
I1201 19:27:08.268511   80605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.268711   80605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:27:08.269653   80605 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.269855   80605 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.270780   80605 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
I1201 19:27:08.289213   80605 ssh_runner.go:195] Run: systemctl --version
I1201 19:27:08.289252   80605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
I1201 19:27:08.308767   80605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
I1201 19:27:08.407069   80605 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
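
All four image ls variants (short, table, json, yaml) are rendered from the same source: the CLI opens an SSH session to the node and runs sudo crictl images --output json, as the stderr above shows, and only the client-side formatting differs. The sketch below reads that JSON directly on a node; it assumes crictl's output is a top-level images array with id, repoTags and size fields, which matches the JSON listing printed by the ImageListJson test further down.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictl's `images --output json` shape, reduced to the fields used here
// (assumed field names; verify against your crictl version).
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	// On the node itself one would run this directly; the test reaches it
	// over SSH via the minikube CLI instead.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range imgs.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}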

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-415638 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 740kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/my-image                      │ functional-415638  │ 1e26aad8c5e80 │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-415638 image ls --format table --alsologtostderr:
I1201 19:27:11.684196   81355 out.go:360] Setting OutFile to fd 1 ...
I1201 19:27:11.684527   81355 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:11.684541   81355 out.go:374] Setting ErrFile to fd 2...
I1201 19:27:11.684548   81355 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:11.684832   81355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:27:11.685713   81355 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:11.685881   81355 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:11.686547   81355 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
I1201 19:27:11.704022   81355 ssh_runner.go:195] Run: systemctl --version
I1201 19:27:11.704065   81355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
I1201 19:27:11.721353   81355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
I1201 19:27:11.820262   81355 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-415638 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1e26aad8c5e80fcada16db375b54e2458d3395a5845537961a94f742dde7f56e","repoDigests":["localhost/my-image@sha256:99354fb9835301df01e2f34973ee65c38aa6c867a45c53047c302f032012ea93"],"repoTags":["localhost/my-image:functional-415638"],"size":"1468744"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c
4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5560617f376966f9cc01d91a20dd765669775640f2733e71dec9b3fce67d1338","repoDigests":["docker.io/library/cfb56cb88b7f15b84646db620a0bd7
bf3efedeec0acb032dfabfb321e5d82428-tmp@sha256:4d4eef73364b2b4f52eeb67280a42d973ad4475f61796e3133d9ce76bd576bf2"],"repoTags":[],"size":"1466132"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad7
3bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube
-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"5
6cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae
94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-415638 image ls --format json --alsologtostderr:
I1201 19:27:11.451679   81299 out.go:360] Setting OutFile to fd 1 ...
I1201 19:27:11.451786   81299 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:11.451793   81299 out.go:374] Setting ErrFile to fd 2...
I1201 19:27:11.451800   81299 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:11.452003   81299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:27:11.452573   81299 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:11.452694   81299 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:11.453129   81299 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
I1201 19:27:11.472174   81299 ssh_runner.go:195] Run: systemctl --version
I1201 19:27:11.472251   81299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
I1201 19:27:11.490451   81299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
I1201 19:27:11.589152   81299 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-415638 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-415638 image ls --format yaml --alsologtostderr:
I1201 19:27:08.499249   80662 out.go:360] Setting OutFile to fd 1 ...
I1201 19:27:08.499542   80662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.499552   80662 out.go:374] Setting ErrFile to fd 2...
I1201 19:27:08.499558   80662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.499749   80662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:27:08.500346   80662 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.500462   80662 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.500906   80662 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
I1201 19:27:08.519488   80662 ssh_runner.go:195] Run: systemctl --version
I1201 19:27:08.519534   80662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
I1201 19:27:08.537355   80662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
I1201 19:27:08.635834   80662 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh pgrep buildkitd: exit status 1 (267.701706ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image build -t localhost/my-image:functional-415638 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 image build -t localhost/my-image:functional-415638 testdata/build --alsologtostderr: (2.209245663s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-415638 image build -t localhost/my-image:functional-415638 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5560617f376
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-415638
--> 1e26aad8c5e
Successfully tagged localhost/my-image:functional-415638
1e26aad8c5e80fcada16db375b54e2458d3395a5845537961a94f742dde7f56e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-415638 image build -t localhost/my-image:functional-415638 testdata/build --alsologtostderr:
I1201 19:27:08.991167   80824 out.go:360] Setting OutFile to fd 1 ...
I1201 19:27:08.991266   80824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.991273   80824 out.go:374] Setting ErrFile to fd 2...
I1201 19:27:08.991277   80824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:27:08.991493   80824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
I1201 19:27:08.992033   80824 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.992697   80824 config.go:182] Loaded profile config "functional-415638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:27:08.993145   80824 cli_runner.go:164] Run: docker container inspect functional-415638 --format={{.State.Status}}
I1201 19:27:09.014677   80824 ssh_runner.go:195] Run: systemctl --version
I1201 19:27:09.014729   80824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-415638
I1201 19:27:09.032646   80824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/functional-415638/id_rsa Username:docker}
I1201 19:27:09.129911   80824 build_images.go:162] Building image from path: /tmp/build.368013910.tar
I1201 19:27:09.129983   80824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 19:27:09.138048   80824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.368013910.tar
I1201 19:27:09.141752   80824 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.368013910.tar: stat -c "%s %y" /var/lib/minikube/build/build.368013910.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.368013910.tar': No such file or directory
I1201 19:27:09.141781   80824 ssh_runner.go:362] scp /tmp/build.368013910.tar --> /var/lib/minikube/build/build.368013910.tar (3072 bytes)
I1201 19:27:09.159421   80824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.368013910
I1201 19:27:09.167442   80824 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.368013910 -xf /var/lib/minikube/build/build.368013910.tar
I1201 19:27:09.175469   80824 crio.go:315] Building image: /var/lib/minikube/build/build.368013910
I1201 19:27:09.175531   80824 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-415638 /var/lib/minikube/build/build.368013910 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1201 19:27:11.122962   80824 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-415638 /var/lib/minikube/build/build.368013910 --cgroup-manager=cgroupfs: (1.947401249s)
I1201 19:27:11.123013   80824 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.368013910
I1201 19:27:11.131577   80824 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.368013910.tar
I1201 19:27:11.139383   80824 build_images.go:218] Built localhost/my-image:functional-415638 from /tmp/build.368013910.tar
I1201 19:27:11.139413   80824 build_images.go:134] succeeded building to: functional-415638
I1201 19:27:11.139418   80824 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.72s)
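Note: the three build steps echoed in the Stdout above suggest that the testdata/build context used by this test contains roughly the following Dockerfile. This is a sketch reconstructed only from the STEP 1/3..3/3 lines; the actual file and the contents of content.txt may differ.

# sketch reconstructed from the STEP output above
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /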

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-415638
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image rm kicbase/echo-server:functional-415638 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 76762: os: process already finished
helpers_test.go:519: unable to terminate pid 76483: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4281488406/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.150233ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 19:26:47.273888   16873 retry.go:31] will retry after 638.702718ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4281488406/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "sudo umount -f /mount-9p": exit status 1 (320.699341ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-415638 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4281488406/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-415638 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [46250fd7-43fa-4ad8-8fd2-0c402abf675e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [46250fd7-43fa-4ad8-8fd2-0c402abf675e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003299469s
I1201 19:26:55.206307   16873 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.89s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T" /mount1: exit status 1 (381.831818ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1201 19:26:49.455764   16873 retry.go:31] will retry after 459.131384ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-415638 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-415638 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2557591703/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-415638 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.188.206 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-415638 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 service list: (1.700223335s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-415638 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-415638 service list -o json: (1.699526047s)
functional_test.go:1504: Took "1.699609832s" to run "out/minikube-linux-amd64 -p functional-415638 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-415638
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-415638
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-415638
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (155.26s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1201 19:38:01.299322   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:39:04.745677   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m34.547237587s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (155.26s)

TestMultiControlPlane/serial/DeployApp (4.5s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 kubectl -- rollout status deployment/busybox: (2.593681313s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-2fzn7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-f9jg8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-qv8xx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-2fzn7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-f9jg8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-qv8xx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-2fzn7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-f9jg8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-qv8xx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.50s)

TestMultiControlPlane/serial/PingHostFromPods (1s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-2fzn7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-2fzn7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-f9jg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-f9jg8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-qv8xx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 kubectl -- exec busybox-7b57f96db7-qv8xx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

TestMultiControlPlane/serial/AddWorkerNode (23.96s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 node add --alsologtostderr -v 5: (23.084932833s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.96s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-707568 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (17.13s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp testdata/cp-test.txt ha-707568:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305123344/001/cp-test_ha-707568.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568:/home/docker/cp-test.txt ha-707568-m02:/home/docker/cp-test_ha-707568_ha-707568-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test_ha-707568_ha-707568-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568:/home/docker/cp-test.txt ha-707568-m03:/home/docker/cp-test_ha-707568_ha-707568-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test_ha-707568_ha-707568-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568:/home/docker/cp-test.txt ha-707568-m04:/home/docker/cp-test_ha-707568_ha-707568-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test_ha-707568_ha-707568-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp testdata/cp-test.txt ha-707568-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305123344/001/cp-test_ha-707568-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m02:/home/docker/cp-test.txt ha-707568:/home/docker/cp-test_ha-707568-m02_ha-707568.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test_ha-707568-m02_ha-707568.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m02:/home/docker/cp-test.txt ha-707568-m03:/home/docker/cp-test_ha-707568-m02_ha-707568-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test_ha-707568-m02_ha-707568-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m02:/home/docker/cp-test.txt ha-707568-m04:/home/docker/cp-test_ha-707568-m02_ha-707568-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test_ha-707568-m02_ha-707568-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp testdata/cp-test.txt ha-707568-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305123344/001/cp-test_ha-707568-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m03:/home/docker/cp-test.txt ha-707568:/home/docker/cp-test_ha-707568-m03_ha-707568.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test_ha-707568-m03_ha-707568.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m03:/home/docker/cp-test.txt ha-707568-m02:/home/docker/cp-test_ha-707568-m03_ha-707568-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test_ha-707568-m03_ha-707568-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m03:/home/docker/cp-test.txt ha-707568-m04:/home/docker/cp-test_ha-707568-m03_ha-707568-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test_ha-707568-m03_ha-707568-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp testdata/cp-test.txt ha-707568-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305123344/001/cp-test_ha-707568-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m04:/home/docker/cp-test.txt ha-707568:/home/docker/cp-test_ha-707568-m04_ha-707568.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568 "sudo cat /home/docker/cp-test_ha-707568-m04_ha-707568.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m04:/home/docker/cp-test.txt ha-707568-m02:/home/docker/cp-test_ha-707568-m04_ha-707568-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m02 "sudo cat /home/docker/cp-test_ha-707568-m04_ha-707568-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 cp ha-707568-m04:/home/docker/cp-test.txt ha-707568-m03:/home/docker/cp-test_ha-707568-m04_ha-707568-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 ssh -n ha-707568-m03 "sudo cat /home/docker/cp-test_ha-707568-m04_ha-707568-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.13s)

TestMultiControlPlane/serial/StopSecondaryNode (19.79s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 node stop m02 --alsologtostderr -v 5: (19.093308549s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5: exit status 7 (694.728588ms)

-- stdout --
	ha-707568
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-707568-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-707568-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-707568-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1201 19:40:43.427654  105665 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:40:43.427891  105665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:40:43.427898  105665 out.go:374] Setting ErrFile to fd 2...
	I1201 19:40:43.427903  105665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:40:43.428085  105665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:40:43.428247  105665 out.go:368] Setting JSON to false
	I1201 19:40:43.428268  105665 mustload.go:66] Loading cluster: ha-707568
	I1201 19:40:43.428341  105665 notify.go:221] Checking for updates...
	I1201 19:40:43.428639  105665 config.go:182] Loaded profile config "ha-707568": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:40:43.428653  105665 status.go:174] checking status of ha-707568 ...
	I1201 19:40:43.429063  105665 cli_runner.go:164] Run: docker container inspect ha-707568 --format={{.State.Status}}
	I1201 19:40:43.450488  105665 status.go:371] ha-707568 host status = "Running" (err=<nil>)
	I1201 19:40:43.450522  105665 host.go:66] Checking if "ha-707568" exists ...
	I1201 19:40:43.450809  105665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-707568
	I1201 19:40:43.470374  105665 host.go:66] Checking if "ha-707568" exists ...
	I1201 19:40:43.470593  105665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:40:43.470629  105665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-707568
	I1201 19:40:43.488684  105665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/ha-707568/id_rsa Username:docker}
	I1201 19:40:43.585006  105665 ssh_runner.go:195] Run: systemctl --version
	I1201 19:40:43.591231  105665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:40:43.604254  105665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:40:43.662537  105665 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-01 19:40:43.653182976 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:40:43.662998  105665 kubeconfig.go:125] found "ha-707568" server: "https://192.168.49.254:8443"
	I1201 19:40:43.663029  105665 api_server.go:166] Checking apiserver status ...
	I1201 19:40:43.663074  105665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:40:43.674850  105665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	W1201 19:40:43.683001  105665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:40:43.683041  105665 ssh_runner.go:195] Run: ls
	I1201 19:40:43.686571  105665 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1201 19:40:43.690549  105665 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1201 19:40:43.690569  105665 status.go:463] ha-707568 apiserver status = Running (err=<nil>)
	I1201 19:40:43.690577  105665 status.go:176] ha-707568 status: &{Name:ha-707568 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:40:43.690597  105665 status.go:174] checking status of ha-707568-m02 ...
	I1201 19:40:43.690820  105665 cli_runner.go:164] Run: docker container inspect ha-707568-m02 --format={{.State.Status}}
	I1201 19:40:43.709672  105665 status.go:371] ha-707568-m02 host status = "Stopped" (err=<nil>)
	I1201 19:40:43.709695  105665 status.go:384] host is not running, skipping remaining checks
	I1201 19:40:43.709702  105665 status.go:176] ha-707568-m02 status: &{Name:ha-707568-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:40:43.709718  105665 status.go:174] checking status of ha-707568-m03 ...
	I1201 19:40:43.709940  105665 cli_runner.go:164] Run: docker container inspect ha-707568-m03 --format={{.State.Status}}
	I1201 19:40:43.726931  105665 status.go:371] ha-707568-m03 host status = "Running" (err=<nil>)
	I1201 19:40:43.726950  105665 host.go:66] Checking if "ha-707568-m03" exists ...
	I1201 19:40:43.727228  105665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-707568-m03
	I1201 19:40:43.744839  105665 host.go:66] Checking if "ha-707568-m03" exists ...
	I1201 19:40:43.745068  105665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:40:43.745107  105665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-707568-m03
	I1201 19:40:43.761905  105665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/ha-707568-m03/id_rsa Username:docker}
	I1201 19:40:43.859807  105665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:40:43.872387  105665 kubeconfig.go:125] found "ha-707568" server: "https://192.168.49.254:8443"
	I1201 19:40:43.872410  105665 api_server.go:166] Checking apiserver status ...
	I1201 19:40:43.872449  105665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:40:43.883091  105665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W1201 19:40:43.891106  105665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:40:43.891160  105665 ssh_runner.go:195] Run: ls
	I1201 19:40:43.894658  105665 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1201 19:40:43.900133  105665 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1201 19:40:43.900156  105665 status.go:463] ha-707568-m03 apiserver status = Running (err=<nil>)
	I1201 19:40:43.900166  105665 status.go:176] ha-707568-m03 status: &{Name:ha-707568-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:40:43.900185  105665 status.go:174] checking status of ha-707568-m04 ...
	I1201 19:40:43.900451  105665 cli_runner.go:164] Run: docker container inspect ha-707568-m04 --format={{.State.Status}}
	I1201 19:40:43.919300  105665 status.go:371] ha-707568-m04 host status = "Running" (err=<nil>)
	I1201 19:40:43.919334  105665 host.go:66] Checking if "ha-707568-m04" exists ...
	I1201 19:40:43.919581  105665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-707568-m04
	I1201 19:40:43.936526  105665 host.go:66] Checking if "ha-707568-m04" exists ...
	I1201 19:40:43.936830  105665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:40:43.936880  105665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-707568-m04
	I1201 19:40:43.954815  105665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/ha-707568-m04/id_rsa Username:docker}
	I1201 19:40:44.050358  105665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:40:44.062563  105665 status.go:176] ha-707568-m04 status: &{Name:ha-707568-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.79s)
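The status trace above ends with minikube probing the apiserver itself: after the pgrep and the failed freezer-cgroup lookup it simply GETs https://192.168.49.254:8443/healthz and treats the 200 "ok" as Running. A minimal Go sketch of that last probe, reusing the endpoint from the log (TLS verification is skipped only to keep the illustration self-contained; a real client would verify the apiserver certificate):

// Probe the apiserver /healthz endpoint, mirroring the final step of the status check above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz") // endpoint taken from the log above
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body) // e.g. "... returned 200: ok"
}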

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.57s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 node start m02 --alsologtostderr -v 5: (13.621746195s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.97s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 stop --alsologtostderr -v 5
E1201 19:41:39.278991   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.285381   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.296742   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.318143   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.359655   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.441144   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.602766   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:39.924546   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:40.566602   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:41.848388   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:44.411343   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 stop --alsologtostderr -v 5: (45.902613242s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 start --wait true --alsologtostderr -v 5
E1201 19:41:49.533496   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:41:59.775146   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:42:20.257169   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 start --wait true --alsologtostderr -v 5: (58.94335089s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.97s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 node delete m03 --alsologtostderr -v 5: (9.720151792s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)
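The go-template handed to kubectl above just walks each node's status.conditions and prints the Ready status. A rough client-go equivalent of the same check, with a default kubeconfig path assumed purely for illustration:

// List nodes and print their Ready condition, mirroring the go-template check above.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the test itself uses the minikube-managed kubeconfig.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", node.Name, cond.Status)
			}
		}
	}
}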

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (47.1s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 stop --alsologtostderr -v 5
E1201 19:43:01.218874   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:43:01.298635   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 stop --alsologtostderr -v 5: (46.989787291s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5: exit status 7 (113.134069ms)

-- stdout --
	ha-707568
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-707568-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-707568-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1201 19:43:43.462015  120055 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:43:43.462314  120055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:43:43.462323  120055 out.go:374] Setting ErrFile to fd 2...
	I1201 19:43:43.462327  120055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:43:43.462562  120055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:43:43.462774  120055 out.go:368] Setting JSON to false
	I1201 19:43:43.462798  120055 mustload.go:66] Loading cluster: ha-707568
	I1201 19:43:43.462920  120055 notify.go:221] Checking for updates...
	I1201 19:43:43.463280  120055 config.go:182] Loaded profile config "ha-707568": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:43:43.463306  120055 status.go:174] checking status of ha-707568 ...
	I1201 19:43:43.463720  120055 cli_runner.go:164] Run: docker container inspect ha-707568 --format={{.State.Status}}
	I1201 19:43:43.482805  120055 status.go:371] ha-707568 host status = "Stopped" (err=<nil>)
	I1201 19:43:43.482824  120055 status.go:384] host is not running, skipping remaining checks
	I1201 19:43:43.482830  120055 status.go:176] ha-707568 status: &{Name:ha-707568 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:43:43.482855  120055 status.go:174] checking status of ha-707568-m02 ...
	I1201 19:43:43.483080  120055 cli_runner.go:164] Run: docker container inspect ha-707568-m02 --format={{.State.Status}}
	I1201 19:43:43.500736  120055 status.go:371] ha-707568-m02 host status = "Stopped" (err=<nil>)
	I1201 19:43:43.500770  120055 status.go:384] host is not running, skipping remaining checks
	I1201 19:43:43.500780  120055 status.go:176] ha-707568-m02 status: &{Name:ha-707568-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:43:43.500802  120055 status.go:174] checking status of ha-707568-m04 ...
	I1201 19:43:43.501043  120055 cli_runner.go:164] Run: docker container inspect ha-707568-m04 --format={{.State.Status}}
	I1201 19:43:43.518700  120055 status.go:371] ha-707568-m04 host status = "Stopped" (err=<nil>)
	I1201 19:43:43.518734  120055 status.go:384] host is not running, skipping remaining checks
	I1201 19:43:43.518744  120055 status.go:176] ha-707568-m04 status: &{Name:ha-707568-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.10s)

TestMultiControlPlane/serial/RestartCluster (53.12s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1201 19:44:04.746635   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:44:23.141345   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.322787453s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.12s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (73.4s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 node add --control-plane --alsologtostderr -v 5
E1201 19:45:27.811152   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-707568 node add --control-plane --alsologtostderr -v 5: (1m12.512075929s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-707568 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (66.52s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-779728 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1201 19:46:04.368982   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:46:39.279996   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-779728 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m6.519174917s)
--- PASS: TestJSONOutput/start/Command (66.52s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.02s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-779728 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-779728 --output=json --user=testUser: (8.017136033s)
--- PASS: TestJSONOutput/stop/Command (8.02s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-387447 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-387447 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.537067ms)

-- stdout --
	{"specversion":"1.0","id":"aebb882f-5714-4f46-b04d-c5c434f00ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-387447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"084d8a24-81a3-4ed4-9f70-6ae51cae2f86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"86201b54-96d7-41b3-93fd-7987e5b50980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0611f8d8-bc98-461e-b8a8-1038fd2d5f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig"}}
	{"specversion":"1.0","id":"f8fd2955-9a44-438e-82ce-277e75802be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube"}}
	{"specversion":"1.0","id":"d70e45ee-4a21-4f24-a30e-be38098db88f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"89af82f0-c802-44fe-b448-12e357113d3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2457618b-f106-4077-b64b-bc5b64966aa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-387447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-387447
--- PASS: TestErrorJSONOutput (0.23s)
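Each --output=json run in this report prints one CloudEvents-style object per line, with the interesting fields nested under data; the TestErrorJSONOutput stdout above shows both a step event and an error event. A small sketch that decodes such a stream and extracts the fields the JSONOutput subtests assert on (currentstep, totalsteps, message, exitcode), with the field names read straight off the captured output:

// Decode minikube's --output=json stream line by line, e.g.
//   minikube start --output=json | this program
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}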

TestKicCustomNetwork/create_custom_network (25.74s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-356059 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-356059 --network=: (23.536737347s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-356059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-356059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-356059: (2.184730642s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.74s)

TestKicCustomNetwork/use_default_bridge_network (21.97s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-108784 --network=bridge
E1201 19:48:01.299375   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-108784 --network=bridge: (19.978856741s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-108784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-108784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-108784: (1.970953309s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.97s)

TestKicExistingNetwork (23.17s)
=== RUN   TestKicExistingNetwork
I1201 19:48:10.100460   16873 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1201 19:48:10.117890   16873 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1201 19:48:10.117951   16873 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1201 19:48:10.117966   16873 cli_runner.go:164] Run: docker network inspect existing-network
W1201 19:48:10.134769   16873 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1201 19:48:10.134815   16873 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1201 19:48:10.134838   16873 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1201 19:48:10.134987   16873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1201 19:48:10.153417   16873 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-76afd0f6296c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:9d:28:e3:43:67} reservation:<nil>}
I1201 19:48:10.153818   16873 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000118960}
I1201 19:48:10.153848   16873 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1201 19:48:10.153896   16873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1201 19:48:10.201067   16873 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-092842 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-092842 --network=existing-network: (21.050673274s)
helpers_test.go:175: Cleaning up "existing-network-092842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-092842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-092842: (1.979259839s)
I1201 19:48:33.248755   16873 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.17s)
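The TestKicExistingNetwork log above shows the subnet scan in network_create: 192.168.49.0/24 is skipped because an existing bridge already owns it, 192.168.58.0/24 is picked as the first free private subnet, and docker network create is run with that CIDR. A toy Go sketch of such a scan; the starting octet (49) and the step of 9 are simply what this particular log shows and are assumptions here, not minikube's documented algorithm:

// Pick the first free 192.168.x.0/24 subnet given a list of subnets already in use.
package main

import (
	"fmt"
	"net"
)

func firstFreeSubnet(taken []string) (string, error) {
	used := map[string]bool{}
	for _, t := range taken {
		_, n, err := net.ParseCIDR(t)
		if err != nil {
			return "", err
		}
		used[n.String()] = true
	}
	// Octet start and step are assumptions taken from the 49 -> 58 jump in the log above.
	for octet := 49; octet <= 254; octet += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", octet)
		if !used[candidate] {
			return candidate, nil
		}
	}
	return "", fmt.Errorf("no free subnet found")
}

func main() {
	// 192.168.49.0/24 is taken by the existing minikube bridge in the log above.
	subnet, err := firstFreeSubnet([]string{"192.168.49.0/24"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", subnet) // prints 192.168.58.0/24
}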

TestKicCustomSubnet (22.16s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-450277 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-450277 --subnet=192.168.60.0/24: (20.00710512s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-450277 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-450277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-450277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-450277: (2.131057431s)
--- PASS: TestKicCustomSubnet (22.16s)

TestKicStaticIP (24.59s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-743329 --static-ip=192.168.200.200
E1201 19:49:04.747456   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-743329 --static-ip=192.168.200.200: (22.306177798s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-743329 ip
helpers_test.go:175: Cleaning up "static-ip-743329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-743329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-743329: (2.1325508s)
--- PASS: TestKicStaticIP (24.59s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.77s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-505359 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-505359 --driver=docker  --container-runtime=crio: (20.236022679s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-507665 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-507665 --driver=docker  --container-runtime=crio: (22.629744723s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-505359
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-507665
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-507665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-507665
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-507665: (2.329441551s)
helpers_test.go:175: Cleaning up "first-505359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-505359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-505359: (2.341072465s)
--- PASS: TestMinikubeProfile (48.77s)

TestMountStart/serial/StartWithMountFirst (4.73s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-564263 --memory=3072 --mount-string /tmp/TestMountStartserial2454219033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-564263 --memory=3072 --mount-string /tmp/TestMountStartserial2454219033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.728851187s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.73s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-564263 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.89s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-580497 --memory=3072 --mount-string /tmp/TestMountStartserial2454219033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-580497 --memory=3072 --mount-string /tmp/TestMountStartserial2454219033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.892484975s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.89s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-564263 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-564263 --alsologtostderr -v=5: (1.694856642s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-580497
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-580497: (1.259077344s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.46s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-580497
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-580497: (6.457198002s)
--- PASS: TestMountStart/serial/RestartStopped (7.46s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (67.52s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.017933058s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
E1201 19:51:39.275959   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.52s)

TestMultiNode/serial/DeployApp2Nodes (3.31s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-949956 -- rollout status deployment/busybox: (1.931437936s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-6s9kl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-bf8r5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-6s9kl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-bf8r5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-6s9kl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-bf8r5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.31s)

TestMultiNode/serial/PingHostFrom2Pods (0.7s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-6s9kl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-6s9kl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-bf8r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-949956 -- exec busybox-7b57f96db7-bf8r5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)

TestMultiNode/serial/AddNode (53.01s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-949956 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-949956 -v=5 --alsologtostderr: (52.357624459s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-949956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.85s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368600804/001/cp-test_multinode-949956.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt multinode-949956-m02:/home/docker/cp-test_multinode-949956_multinode-949956-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test_multinode-949956_multinode-949956-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt multinode-949956-m03:/home/docker/cp-test_multinode-949956_multinode-949956-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test_multinode-949956_multinode-949956-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368600804/001/cp-test_multinode-949956-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt multinode-949956:/home/docker/cp-test_multinode-949956-m02_multinode-949956.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test_multinode-949956-m02_multinode-949956.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m02:/home/docker/cp-test.txt multinode-949956-m03:/home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test_multinode-949956-m02_multinode-949956-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368600804/001/cp-test_multinode-949956-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt multinode-949956:/home/docker/cp-test_multinode-949956-m03_multinode-949956.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test_multinode-949956-m03_multinode-949956.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956-m03:/home/docker/cp-test.txt multinode-949956-m02:/home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956-m02 "sudo cat /home/docker/cp-test_multinode-949956-m03_multinode-949956-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.85s)
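The copy/verify pattern the CopyFile steps above repeat is small: push a file onto a node with minikube cp, read it back over minikube ssh, then copy it node-to-node. A minimal sketch of that pattern, using only the profile, nodes, and paths from this run:

	out/minikube-linux-amd64 -p multinode-949956 cp testdata/cp-test.txt multinode-949956:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-949956 ssh -n multinode-949956 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p multinode-949956 cp multinode-949956:/home/docker/cp-test.txt multinode-949956-m02:/home/docker/cp-test_multinode-949956_multinode-949956-m02.txt

The m02 and m03 cases above are the same three steps with the node names and target filenames swapped.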

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node stop m03: (1.260684724s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status: exit status 7 (490.920253ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-949956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-949956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr: exit status 7 (490.84655ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-949956-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-949956-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:52:48.743072  179870 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:52:48.743338  179870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:52:48.743347  179870 out.go:374] Setting ErrFile to fd 2...
	I1201 19:52:48.743352  179870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:52:48.743557  179870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:52:48.743712  179870 out.go:368] Setting JSON to false
	I1201 19:52:48.743736  179870 mustload.go:66] Loading cluster: multinode-949956
	I1201 19:52:48.743793  179870 notify.go:221] Checking for updates...
	I1201 19:52:48.744088  179870 config.go:182] Loaded profile config "multinode-949956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:52:48.744102  179870 status.go:174] checking status of multinode-949956 ...
	I1201 19:52:48.744617  179870 cli_runner.go:164] Run: docker container inspect multinode-949956 --format={{.State.Status}}
	I1201 19:52:48.763978  179870 status.go:371] multinode-949956 host status = "Running" (err=<nil>)
	I1201 19:52:48.763998  179870 host.go:66] Checking if "multinode-949956" exists ...
	I1201 19:52:48.764235  179870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-949956
	I1201 19:52:48.782412  179870 host.go:66] Checking if "multinode-949956" exists ...
	I1201 19:52:48.782648  179870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:52:48.782715  179870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-949956
	I1201 19:52:48.800091  179870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/multinode-949956/id_rsa Username:docker}
	I1201 19:52:48.895494  179870 ssh_runner.go:195] Run: systemctl --version
	I1201 19:52:48.901658  179870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:52:48.913400  179870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 19:52:48.968180  179870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-01 19:52:48.958426948 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 19:52:48.968705  179870 kubeconfig.go:125] found "multinode-949956" server: "https://192.168.67.2:8443"
	I1201 19:52:48.968732  179870 api_server.go:166] Checking apiserver status ...
	I1201 19:52:48.968762  179870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:52:48.979612  179870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1201 19:52:48.987446  179870 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:52:48.987532  179870 ssh_runner.go:195] Run: ls
	I1201 19:52:48.990882  179870 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1201 19:52:48.994821  179870 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1201 19:52:48.994838  179870 status.go:463] multinode-949956 apiserver status = Running (err=<nil>)
	I1201 19:52:48.994846  179870 status.go:176] multinode-949956 status: &{Name:multinode-949956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:52:48.994861  179870 status.go:174] checking status of multinode-949956-m02 ...
	I1201 19:52:48.995084  179870 cli_runner.go:164] Run: docker container inspect multinode-949956-m02 --format={{.State.Status}}
	I1201 19:52:49.012812  179870 status.go:371] multinode-949956-m02 host status = "Running" (err=<nil>)
	I1201 19:52:49.012833  179870 host.go:66] Checking if "multinode-949956-m02" exists ...
	I1201 19:52:49.013066  179870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-949956-m02
	I1201 19:52:49.030237  179870 host.go:66] Checking if "multinode-949956-m02" exists ...
	I1201 19:52:49.030500  179870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:52:49.030539  179870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-949956-m02
	I1201 19:52:49.047766  179870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21997-13091/.minikube/machines/multinode-949956-m02/id_rsa Username:docker}
	I1201 19:52:49.144413  179870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:52:49.156794  179870 status.go:176] multinode-949956-m02 status: &{Name:multinode-949956-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:52:49.156832  179870 status.go:174] checking status of multinode-949956-m03 ...
	I1201 19:52:49.157100  179870 cli_runner.go:164] Run: docker container inspect multinode-949956-m03 --format={{.State.Status}}
	I1201 19:52:49.175037  179870 status.go:371] multinode-949956-m03 host status = "Stopped" (err=<nil>)
	I1201 19:52:49.175060  179870 status.go:384] host is not running, skipping remaining checks
	I1201 19:52:49.175068  179870 status.go:176] multinode-949956-m03 status: &{Name:multinode-949956-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
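Note on the exit status 7 above: with m03 stopped, minikube status still prints per-node state but exits non-zero, and the test treats that as the expected outcome. A minimal sketch of tolerating it in a wrapper (profile name taken from this run; the specific code 7 is what this run shows, not a contract asserted here):

	out/minikube-linux-amd64 -p multinode-949956 node stop m03
	out/minikube-linux-amd64 -p multinode-949956 status || echo "status exited non-zero: at least one node is not running"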

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node start m03 -v=5 --alsologtostderr: (6.458870919s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (79.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-949956
E1201 19:53:01.299514   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-949956: (29.507723302s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=5 --alsologtostderr
E1201 19:54:04.746453   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=5 --alsologtostderr: (49.813881888s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.44s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 node delete m03: (4.634442951s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (30.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-949956 stop: (30.1919661s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status: exit status 7 (97.076196ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-949956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr: exit status 7 (92.416523ms)

                                                
                                                
-- stdout --
	multinode-949956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-949956-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:54:51.338760  189752 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:54:51.339031  189752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:54:51.339042  189752 out.go:374] Setting ErrFile to fd 2...
	I1201 19:54:51.339046  189752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:54:51.339269  189752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:54:51.339437  189752 out.go:368] Setting JSON to false
	I1201 19:54:51.339462  189752 mustload.go:66] Loading cluster: multinode-949956
	I1201 19:54:51.339517  189752 notify.go:221] Checking for updates...
	I1201 19:54:51.339960  189752 config.go:182] Loaded profile config "multinode-949956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:54:51.339979  189752 status.go:174] checking status of multinode-949956 ...
	I1201 19:54:51.340521  189752 cli_runner.go:164] Run: docker container inspect multinode-949956 --format={{.State.Status}}
	I1201 19:54:51.358513  189752 status.go:371] multinode-949956 host status = "Stopped" (err=<nil>)
	I1201 19:54:51.358538  189752 status.go:384] host is not running, skipping remaining checks
	I1201 19:54:51.358546  189752 status.go:176] multinode-949956 status: &{Name:multinode-949956 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:54:51.358594  189752 status.go:174] checking status of multinode-949956-m02 ...
	I1201 19:54:51.358946  189752 cli_runner.go:164] Run: docker container inspect multinode-949956-m02 --format={{.State.Status}}
	I1201 19:54:51.376378  189752 status.go:371] multinode-949956-m02 host status = "Stopped" (err=<nil>)
	I1201 19:54:51.376400  189752 status.go:384] host is not running, skipping remaining checks
	I1201 19:54:51.376405  189752 status.go:176] multinode-949956-m02 status: &{Name:multinode-949956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (25.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (24.581394714s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-949956 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (25.18s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (22.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-949956
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-949956-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.589225ms)

                                                
                                                
-- stdout --
	* [multinode-949956-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-949956-m02' is duplicated with machine name 'multinode-949956-m02' in profile 'multinode-949956'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-949956-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-949956-m03 --driver=docker  --container-runtime=crio: (19.417719136s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-949956
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-949956: exit status 80 (290.857046ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-949956 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-949956-m03 already exists in multinode-949956-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-949956-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-949956-m03: (2.355650799s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.20s)

                                                
                                    
x
+
TestPreload (102.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-682503 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-682503 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (43.270451316s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-682503 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-682503 image pull gcr.io/k8s-minikube/busybox: (1.419987604s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-682503
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-682503: (6.094979759s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-682503 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1201 19:56:39.276388   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-682503 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.206974657s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-682503 image list
helpers_test.go:175: Cleaning up "test-preload-682503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-682503
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-682503: (2.351958012s)
--- PASS: TestPreload (102.58s)

                                                
                                    
x
+
TestScheduledStopUnix (98.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-052030 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-052030 --memory=3072 --driver=docker  --container-runtime=crio: (22.738587944s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 19:57:48.349577  206687 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:57:48.349712  206687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:57:48.349724  206687 out.go:374] Setting ErrFile to fd 2...
	I1201 19:57:48.349730  206687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:57:48.349949  206687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:57:48.350213  206687 out.go:368] Setting JSON to false
	I1201 19:57:48.350333  206687 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:57:48.350686  206687 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:57:48.350768  206687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/config.json ...
	I1201 19:57:48.350953  206687 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:57:48.351082  206687 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-052030 -n scheduled-stop-052030
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 19:57:48.735943  206838 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:57:48.736184  206838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:57:48.736193  206838 out.go:374] Setting ErrFile to fd 2...
	I1201 19:57:48.736197  206838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:57:48.736398  206838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:57:48.736619  206838 out.go:368] Setting JSON to false
	I1201 19:57:48.736798  206838 daemonize_unix.go:73] killing process 206721 as it is an old scheduled stop
	I1201 19:57:48.736900  206838 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:57:48.737230  206838 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:57:48.737322  206838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/config.json ...
	I1201 19:57:48.737514  206838 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:57:48.737719  206838 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1201 19:57:48.741921   16873 retry.go:31] will retry after 108.555µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.743050   16873 retry.go:31] will retry after 87.241µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.744196   16873 retry.go:31] will retry after 244.412µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.745338   16873 retry.go:31] will retry after 358.893µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.746457   16873 retry.go:31] will retry after 726.471µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.747571   16873 retry.go:31] will retry after 739.206µs: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.748682   16873 retry.go:31] will retry after 1.459956ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.750875   16873 retry.go:31] will retry after 2.082711ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.754068   16873 retry.go:31] will retry after 2.653346ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.757251   16873 retry.go:31] will retry after 5.698429ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.763455   16873 retry.go:31] will retry after 8.121796ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.772714   16873 retry.go:31] will retry after 11.71103ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.785534   16873 retry.go:31] will retry after 6.866938ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.792756   16873 retry.go:31] will retry after 25.180206ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.819002   16873 retry.go:31] will retry after 26.94808ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
I1201 19:57:48.846259   16873 retry.go:31] will retry after 49.669577ms: open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052030 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1201 19:58:01.299362   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:58:02.345718   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052030 -n scheduled-stop-052030
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052030
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 19:58:14.644492  207389 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:58:14.644769  207389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:58:14.644780  207389 out.go:374] Setting ErrFile to fd 2...
	I1201 19:58:14.644787  207389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:58:14.645016  207389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 19:58:14.645269  207389 out.go:368] Setting JSON to false
	I1201 19:58:14.645385  207389 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:58:14.645715  207389 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:58:14.645809  207389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/scheduled-stop-052030/config.json ...
	I1201 19:58:14.646007  207389 mustload.go:66] Loading cluster: scheduled-stop-052030
	I1201 19:58:14.646142  207389 config.go:182] Loaded profile config "scheduled-stop-052030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052030
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-052030: exit status 7 (78.116226ms)

                                                
                                                
-- stdout --
	scheduled-stop-052030
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052030 -n scheduled-stop-052030
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052030 -n scheduled-stop-052030: exit status 7 (76.611966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-052030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-052030
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-052030: (4.577340462s)
--- PASS: TestScheduledStopUnix (98.83s)
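The scheduled-stop flow exercised above, stripped of the test's logging flags: schedule a stop, re-schedule it (which kills the older scheduled-stop process, per the daemonize_unix.go line), cancel, then schedule again and let it fire, after which status reports Stopped with exit code 7. The bare sequence from this run:

	out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 15s
	out/minikube-linux-amd64 stop -p scheduled-stop-052030 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-052030 --schedule 15s
	out/minikube-linux-amd64 status -p scheduled-stop-052030   # exits 7 once the scheduled stop has fired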

                                                
                                    
x
+
TestInsufficientStorage (11.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-784815 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1201 19:59:04.746335   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-784815 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.313173677s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd05e153-1311-44f9-80f7-94e717997f9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-784815] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"95771451-f1cb-421b-b743-9b1c6300fe77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"594b0dee-cf67-4c70-9904-3063bebf4672","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"327708d4-49c4-4d1a-8fb7-2c5d04c7f2f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig"}}
	{"specversion":"1.0","id":"cb4592fe-82ff-403f-8063-2c6b9cd571b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube"}}
	{"specversion":"1.0","id":"594ad735-e030-47cf-bd4f-099fffa437ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"85942349-5407-48b9-8ff6-d9c72d12cdb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8bd2e685-2a7f-4194-80ed-b1ec8e197949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"48066f4f-186e-412c-a005-4ee323a678db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0961f60c-7769-41eb-ac16-2afb96cf9be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2288284-a847-43b2-908d-e8de8b16a2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"28d15018-8880-43d7-8630-90c281533418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-784815\" primary control-plane node in \"insufficient-storage-784815\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e1c9dcb-35d0-40a8-9081-ae1a07e63587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ae6a7b9-db01-4f75-9e90-8a9bbe50b929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"05c7c4bc-6b47-431a-a2a4-5b22ed481e9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-784815 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-784815 --output=json --layout=cluster: exit status 7 (294.307868ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-784815","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-784815","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1201 19:59:13.979004  209914 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-784815" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-784815 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-784815 --output=json --layout=cluster: exit status 7 (289.746732ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-784815","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-784815","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1201 19:59:14.269435  210026 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-784815" does not appear in /home/jenkins/minikube-integration/21997-13091/kubeconfig
	E1201 19:59:14.279364  210026 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/insufficient-storage-784815/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-784815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-784815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-784815: (1.900611717s)
--- PASS: TestInsufficientStorage (11.80s)
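The cluster-layout JSON above is what a caller would inspect to detect this condition programmatically; StatusName is InsufficientStorage and StatusCode is 507 at both the cluster and node level. A small sketch of pulling that field out, assuming jq is available on the host (jq is not part of this run):

	out/minikube-linux-amd64 status -p insufficient-storage-784815 --output=json --layout=cluster | jq -r .StatusName
	# prints InsufficientStorage for the state captured above; the status command itself exits 7 in that state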

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2781734979 start -p running-upgrade-716953 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2781734979 start -p running-upgrade-716953 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.250312963s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-716953 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-716953 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.6079735s)
helpers_test.go:175: Cleaning up "running-upgrade-716953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-716953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-716953: (2.508152032s)
--- PASS: TestRunningBinaryUpgrade (69.87s)

                                                
                                    
x
+
TestKubernetesUpgrade (330.25s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.804070982s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-189963
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-189963: (2.441428126s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-189963 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-189963 status --format={{.Host}}: exit status 7 (103.00542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m53.245833641s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-189963 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (115.810371ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-189963] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-189963
	    minikube start -p kubernetes-upgrade-189963 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1899632 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-189963 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.735424027s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-189963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-189963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-189963: (2.712491438s)
--- PASS: TestKubernetesUpgrade (330.25s)
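The passing upgrade path above is: start on the old --kubernetes-version, stop, then start the same profile with the newer version; only the in-place downgrade attempt is refused (exit 106, with the recreate/second-cluster suggestions shown in the stderr). Stripped of the test's logging flags, the sequence from this run is:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-189963
	out/minikube-linux-amd64 start -p kubernetes-upgrade-189963 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio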

                                                
                                    
x
+
TestMissingContainerUpgrade (83.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1267341296 start -p missing-upgrade-675228 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1267341296 start -p missing-upgrade-675228 --memory=3072 --driver=docker  --container-runtime=crio: (42.736245264s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-675228
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-675228
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-675228 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-675228 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.457818339s)
helpers_test.go:175: Cleaning up "missing-upgrade-675228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-675228
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-675228: (3.661608568s)
--- PASS: TestMissingContainerUpgrade (83.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (82.988866ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-684883] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
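The exit status 14 above is the expected usage error: --no-kubernetes and --kubernetes-version cannot be combined. A minimal sketch of the non-conflicting invocations, reusing the profile name from this test (the unset only matters if a kubernetes-version was stored in the global config):

    # clear any globally pinned version, as the error message suggests
    minikube config unset kubernetes-version
    # then either start without Kubernetes at all...
    minikube start -p NoKubernetes-684883 --no-kubernetes --driver=docker --container-runtime=crio
    # ...or start with an explicit version and drop --no-kubernetes
    minikube start -p NoKubernetes-684883 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio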

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-684883 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-684883 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.818824545s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-684883 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.051138952s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-684883 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-684883 status -o json: exit status 2 (303.132232ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-684883","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-684883
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-684883: (1.990158441s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.34s)
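The JSON above is what the helper asserts on: the node container keeps running while the Kubernetes components are stopped. A minimal sketch of checking the same fields from a shell, assuming jq is installed (note that `minikube status` exits with code 2 while components are stopped, as shown above):

    # expect Host=Running, Kubelet=Stopped, APIServer=Stopped in --no-kubernetes mode
    out/minikube-linux-amd64 -p NoKubernetes-684883 status -o json | jq -r '.Host, .Kubelet, .APIServer'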

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-684883 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.582280533s)
--- PASS: TestNoKubernetes/serial/Start (7.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-13091/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-684883 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-684883 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.816061ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
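The check above relies only on the exit code: with --quiet, `systemctl is-active` prints nothing and exits non-zero (3 here, surfaced by ssh as "Process exited with status 3") when the unit is not active. A minimal sketch of the same probe:

    # a non-zero exit means the kubelet unit is not active, which is what this test expects
    out/minikube-linux-amd64 ssh -p NoKubernetes-684883 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not running"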

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.628555331s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-551864 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-551864 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (167.194018ms)

                                                
                                                
-- stdout --
	* [false-551864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:00:24.922670  229911 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:00:24.922978  229911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:00:24.922988  229911 out.go:374] Setting ErrFile to fd 2...
	I1201 20:00:24.922993  229911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:00:24.923211  229911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-13091/.minikube/bin
	I1201 20:00:24.923730  229911 out.go:368] Setting JSON to false
	I1201 20:00:24.924815  229911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6176,"bootTime":1764613049,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:00:24.924871  229911 start.go:143] virtualization: kvm guest
	I1201 20:00:24.927036  229911 out.go:179] * [false-551864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:00:24.928336  229911 notify.go:221] Checking for updates...
	I1201 20:00:24.928350  229911 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:00:24.929680  229911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:00:24.931006  229911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-13091/kubeconfig
	I1201 20:00:24.932216  229911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-13091/.minikube
	I1201 20:00:24.933372  229911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:00:24.934687  229911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:00:24.936386  229911 config.go:182] Loaded profile config "NoKubernetes-684883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1201 20:00:24.936493  229911 config.go:182] Loaded profile config "missing-upgrade-675228": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1201 20:00:24.936575  229911 config.go:182] Loaded profile config "running-upgrade-716953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1201 20:00:24.936651  229911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:00:24.961138  229911 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1201 20:00:24.961217  229911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1201 20:00:25.022801  229911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-01 20:00:25.012036258 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1201 20:00:25.022918  229911 docker.go:319] overlay module found
	I1201 20:00:25.025333  229911 out.go:179] * Using the docker driver based on user configuration
	I1201 20:00:25.026651  229911 start.go:309] selected driver: docker
	I1201 20:00:25.026667  229911 start.go:927] validating driver "docker" against <nil>
	I1201 20:00:25.026678  229911 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:00:25.028551  229911 out.go:203] 
	W1201 20:00:25.029699  229911 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1201 20:00:25.030863  229911 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-551864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-675228
contexts:
- context:
    cluster: missing-upgrade-675228
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-675228
  name: missing-upgrade-675228
current-context: ""
kind: Config
users:
- name: missing-upgrade-675228
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.crt
    client-key: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-551864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-551864"

                                                
                                                
----------------------- debugLogs end: false-551864 [took: 4.537676659s] --------------------------------
helpers_test.go:175: Cleaning up "false-551864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-551864
--- PASS: TestNetworkPlugins/group/false (5.37s)
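The MK_USAGE failure above is the expected guard: with --container-runtime=crio, minikube rejects --cni=false because CRI-O brings no built-in pod networking. A minimal sketch of starts that pass this validation, mirroring flags used elsewhere in this report:

    # let minikube pick a CNI automatically (the default)...
    minikube start -p false-551864 --memory=3072 --driver=docker --container-runtime=crio
    # ...or name one explicitly, as the bridge/kindnet/calico groups below do
    minikube start -p false-551864 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio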

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-684883
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-684883: (1.346197659s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-684883 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-684883 --driver=docker  --container-runtime=crio: (6.786740886s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-684883 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-684883 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.026916ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestPause/serial/Start (44.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-138480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-138480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.439285498s)
--- PASS: TestPause/serial/Start (44.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (285.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2713403448 start -p stopped-upgrade-533630 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2713403448 start -p stopped-upgrade-533630 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.045848676s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2713403448 -p stopped-upgrade-533630 stop
E1201 20:01:39.276983   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2713403448 -p stopped-upgrade-533630 stop: (3.200926844s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-533630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-533630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.240875576s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (285.49s)
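This upgrade path differs from TestMissingContainerUpgrade above: the cluster is stopped cleanly by the old binary and then started by the new one, so existing state must be reused rather than recreated. A minimal sketch of the flow, where /tmp/minikube-v1.35.0.2713403448 stands in for a previously released binary fetched by the test:

    # create and stop a cluster with the old release
    /tmp/minikube-v1.35.0.2713403448 start -p stopped-upgrade-533630 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.2713403448 -p stopped-upgrade-533630 stop
    # start the same profile with the binary under test
    out/minikube-linux-amd64 start -p stopped-upgrade-533630 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio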

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (5.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-138480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-138480 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.89068148s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (39.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1201 20:02:07.813050   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.204303454s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-551864 "pgrep -a kubelet"
I1201 20:02:41.816616   16873 config.go:182] Loaded profile config "auto-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gfdgc" [fe5adc6d-dfe9-4c84-9000-a8c0f452fe25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1201 20:02:44.371258   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/addons-844427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gfdgc" [fe5adc6d-dfe9-4c84-9000-a8c0f452fe25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003296601s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
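The DNS, Localhost and HairPin checks above are the same three probes run for every network plugin in this report; a minimal sketch of running them by hand against the auto-551864 cluster and its netcat deployment:

    # cluster DNS resolution from inside a pod
    kubectl --context auto-551864 exec deployment/netcat -- nslookup kubernetes.default
    # loopback connectivity to the pod's own listening port
    kubectl --context auto-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: reach the pod back through its own service
    kubectl --context auto-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"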

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.614931138s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ntrsz" [9f41574d-dc47-43ed-b09e-19acebf1f437] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.002849528s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-551864 "pgrep -a kubelet"
I1201 20:03:58.789164   16873 config.go:182] Loaded profile config "kindnet-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-chrk7" [ecf8a5ec-aac1-4d91-bc6b-c6a4a9f44dd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-chrk7" [ecf8a5ec-aac1-4d91-bc6b-c6a4a9f44dd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003814572s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (47.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (47.980925536s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (51.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.639294652s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vcl7h" [a7f0d22f-e0e2-4ccf-999d-fe6d5c943439] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-vcl7h" [a7f0d22f-e0e2-4ccf-999d-fe6d5c943439] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004246229s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-551864 "pgrep -a kubelet"
I1201 20:05:05.717282   16873 config.go:182] Loaded profile config "calico-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rlc7p" [cccd0a41-2930-457e-a0a2-2cc5b9876c8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rlc7p" [cccd0a41-2930-457e-a0a2-2cc5b9876c8d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00428772s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-551864 "pgrep -a kubelet"
I1201 20:05:19.189107   16873 config.go:182] Loaded profile config "custom-flannel-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v2bvq" [fda80afc-12ea-44e1-bb74-d940d63b52f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v2bvq" [fda80afc-12ea-44e1-bb74-d940d63b52f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.002799772s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (61.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.447104586s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.010444036s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-533630
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-533630: (1.094307861s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-551864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.258293841s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.692887556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-551864 "pgrep -a kubelet"
I1201 20:06:36.810026   16873 config.go:182] Loaded profile config "enable-default-cni-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9m56q" [ef647b2d-387f-4fb9-bd78-755a3b76b04b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1201 20:06:39.276649   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-415638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9m56q" [ef647b2d-387f-4fb9-bd78-755a3b76b04b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004335712s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-w6pfq" [dadad9c3-57e7-4d37-bd6c-77419fec97b9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003721817s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
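
    The ControllerPod step confirms the flannel daemonset is healthy by waiting for pods labelled app=flannel in the kube-flannel namespace. The same state can be inspected by hand after a start like the one above (a sketch using plain kubectl):

        # list the flannel controller pods, then block until they are Ready
        kubectl --context flannel-551864 -n kube-flannel get pods -l app=flannel
        kubectl --context flannel-551864 -n kube-flannel wait \
          --for=condition=Ready pod -l app=flannel --timeout=10m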

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-551864 "pgrep -a kubelet"
I1201 20:06:48.877599   16873 config.go:182] Loaded profile config "flannel-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mqz2w" [97e7f6c1-9038-4be8-9f83-a9b9f2da5d0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mqz2w" [97e7f6c1-9038-4be8-9f83-a9b9f2da5d0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003228397s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (46.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.213212901s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-551864 "pgrep -a kubelet"
I1201 20:07:13.790886   16873 config.go:182] Loaded profile config "bridge-551864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-551864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9pcck" [6726f031-e28b-46e0-9660-e851bc18c825] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9pcck" [6726f031-e28b-46e0-9660-e851bc18c825] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003892474s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-217464 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [37d188bf-79e8-4b6f-bbfd-3889f55ecfbd] Pending
helpers_test.go:352: "busybox" [37d188bf-79e8-4b6f-bbfd-3889f55ecfbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [37d188bf-79e8-4b6f-bbfd-3889f55ecfbd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004760776s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-217464 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)
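
    The DeployApp steps follow the same pattern across groups: create the busybox pod from testdata/busybox.yaml, wait for it to become Ready, then read the open-file limit inside it. A hand-run equivalent (a sketch with plain kubectl; the 8m timeout mirrors the log):

        kubectl --context old-k8s-version-217464 create -f testdata/busybox.yaml
        kubectl --context old-k8s-version-217464 wait --for=condition=Ready pod/busybox --timeout=8m
        kubectl --context old-k8s-version-217464 exec busybox -- /bin/sh -c "ulimit -n"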

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (40.978067203s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-551864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-551864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (18.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-217464 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-217464 --alsologtostderr -v=3: (18.282980521s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1201 20:07:44.555421   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:07:47.117262   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (43.652345221s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464: exit status 7 (109.928274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-217464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
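
    EnableAddonAfterStop relies on the fact that "minikube status" exits with code 7 when the host is stopped; the step tolerates that exit code and then enables the dashboard addon against the stopped profile. Reproduced by hand (a sketch):

        # status exits 7 for a stopped host, so don't let it abort the script
        out/minikube-linux-amd64 status --format='{{.Host}}' \
          -p old-k8s-version-217464 -n old-k8s-version-217464 || true
        out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-217464 \
          --images=MetricsScraper=registry.k8s.io/echoserver:1.4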

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (25.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-217464 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (25.125034327s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-217464 -n old-k8s-version-217464
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (25.52s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-240359 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c152aeb-d4c6-436a-96ef-96d8dff15eba] Pending
E1201 20:07:52.239393   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [1c152aeb-d4c6-436a-96ef-96d8dff15eba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c152aeb-d4c6-436a-96ef-96d8dff15eba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003921094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-240359 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-990820 create -f testdata/busybox.yaml
E1201 20:08:02.481665   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [35055282-a717-479f-9f0b-454c174c024e] Pending
helpers_test.go:352: "busybox" [35055282-a717-479f-9f0b-454c174c024e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [35055282-a717-479f-9f0b-454c174c024e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003765515s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-990820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-240359 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-240359 --alsologtostderr -v=3: (18.62610543s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-990820 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-990820 --alsologtostderr -v=3: (16.340557365s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfxh9" [c9b2eed4-9d0b-4f54-8c25-d864a3b6f855] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfxh9" [c9b2eed4-9d0b-4f54-8c25-d864a3b6f855] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003344118s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359: exit status 7 (77.990691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-240359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (43.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-240359 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (43.426419923s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-240359 -n no-preload-240359
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pfxh9" [c9b2eed4-9d0b-4f54-8c25-d864a3b6f855] Running
E1201 20:08:22.963933   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003976405s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-217464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0515a30f-e9f6-4729-b544-8ee69479d1f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0515a30f-e9f6-4729-b544-8ee69479d1f4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.007884339s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-217464 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
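
    VerifyKubernetesImages lists every image in the profile with "image list --format=json" and reports anything outside the expected Kubernetes set (the kindnetd and busybox entries above). To eyeball the same data by hand, something like the following works, assuming jq is installed and that the JSON entries carry a repoTags field (the field name is an assumption, not taken from this log):

        out/minikube-linux-amd64 -p old-k8s-version-217464 image list --format=json \
          | jq -r '.[].repoTags[]' | sort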

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820: exit status 7 (107.877531ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-990820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-990820 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (46.754747404s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-990820 -n embed-certs-990820
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (32.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (32.76921134s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-009682 --alsologtostderr -v=3
E1201 20:08:52.507526   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.513877   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.525240   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.546629   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.588455   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.669902   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:52.831371   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:53.153071   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:53.794340   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:08:55.075755   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-009682 --alsologtostderr -v=3: (17.83573889s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.84s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682: exit status 7 (80.891601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-009682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1201 20:08:57.637796   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:02.760094   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:03.926194   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/auto-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:09:04.746490   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/functional-764481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-009682 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (48.384303431s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-009682 -n default-k8s-diff-port-009682
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-f7grf" [284ddeff-9e83-45dc-b944-445b10dbc83e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004816106s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-f7grf" [284ddeff-9e83-45dc-b944-445b10dbc83e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004063817s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-240359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
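
    The UserAppExistsAfterStop and AddonExistsAfterStop pairs both reduce to waiting for the kubernetes-dashboard pods to become Ready after the restart and then describing the metrics-scraper deployment. A manual equivalent (a sketch; the 9m timeout mirrors the log):

        kubectl --context no-preload-240359 -n kubernetes-dashboard wait \
          --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
        kubectl --context no-preload-240359 -n kubernetes-dashboard \
          describe deploy/dashboard-metrics-scraper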

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-456990 --alsologtostderr -v=3
E1201 20:09:13.001707   16873 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/kindnet-551864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-456990 --alsologtostderr -v=3: (8.719917334s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.72s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k848d" [952cb839-6bc6-4cf2-89cb-692e9df1ef2d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003354805s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-240359 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k848d" [952cb839-6bc6-4cf2-89cb-692e9df1ef2d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003706171s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-990820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990: exit status 7 (84.886902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-456990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-456990 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.085706449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-456990 -n newest-cni-456990
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-990820 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-456990 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s6hvn" [cc861f8f-612e-438c-af44-6b614122609d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003740369s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s6hvn" [cc861f8f-612e-438c-af44-6b614122609d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003012029s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-009682 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-009682 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (33/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.06
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
368 TestNetworkPlugins/group/kubenet 4.11
381 TestNetworkPlugins/group/cilium 4.38
388 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1201 19:05:57.202037   16873 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1201 19:05:57.250062   16873 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1201 19:05:57.264064   16873 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)
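The two 404 warnings above are the preload existence probe: before downloading anything, the test checks whether a preloaded-images tarball is published for v1.35.0-beta.0 with the cri-o runtime and, finding none, skips. Below is a minimal Go sketch of the same probe, assuming an HTTP HEAD request is a reasonable stand-in for what preload.go does; the URL is copied verbatim from the warning above, and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// URL copied verbatim from the warning logged above; the bucket layout and
	// file name come from that log line, not derived independently.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4"

	// Probe with HEAD so nothing is downloaded; a 404 means no preload
	// tarball exists for this Kubernetes version / runtime combination.
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotFound {
		fmt.Println("no preload image; the test would SKIP here")
		return
	}
	fmt.Println("preload tarball available, status:", resp.Status)
}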

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-551864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-675228
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:00:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-716953
contexts:
- context:
    cluster: missing-upgrade-675228
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-675228
  name: missing-upgrade-675228
- context:
    cluster: running-upgrade-716953
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:00:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-716953
  name: running-upgrade-716953
current-context: running-upgrade-716953
kind: Config
users:
- name: missing-upgrade-675228
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.crt
    client-key: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.key
- name: running-upgrade-716953
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/running-upgrade-716953/client.crt
    client-key: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/running-upgrade-716953/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-551864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-551864"

                                                
                                                
----------------------- debugLogs end: kubenet-551864 [took: 3.936069872s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-551864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-551864
--- SKIP: TestNetworkPlugins/group/kubenet (4.11s)
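Every command in the debug dump above fails with "context was not found" or "Profile ... not found" because the kubenet-551864 profile is skipped before a cluster is ever started, so nothing for it is written to the kubeconfig. The following is a minimal, hypothetical helper (not part of the test suite) that verifies this from Go using k8s.io/client-go/tools/clientcmd; the kubeconfig path and the target context name are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default kubeconfig location; the run above uses paths under
	// /home/jenkins/minikube-integration, so the location may differ in CI.
	home, _ := os.UserHomeDir()
	kubeconfig := filepath.Join(home, ".kube", "config")

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}

	// Context name used by the skipped test above.
	target := "kubenet-551864"
	if _, ok := cfg.Contexts[target]; !ok {
		fmt.Printf("context %q not found; contexts present:\n", target)
		for name := range cfg.Contexts {
			fmt.Println(" -", name)
		}
		return
	}
	fmt.Printf("context %q exists (current-context: %q)\n", target, cfg.CurrentContext)
}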

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-551864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-551864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-13091/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-675228
contexts:
- context:
    cluster: missing-upgrade-675228
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 19:59:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-675228
  name: missing-upgrade-675228
current-context: ""
kind: Config
users:
- name: missing-upgrade-675228
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.crt
    client-key: /home/jenkins/minikube-integration/21997-13091/.minikube/profiles/missing-upgrade-675228/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-551864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: cri-dockerd version:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: containerd daemon status:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: containerd daemon config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: containerd config dump:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: crio daemon status:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: crio daemon config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: /etc/crio:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

>>> host: crio config:
* Profile "cilium-551864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-551864"

----------------------- debugLogs end: cilium-551864 [took: 4.175882467s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-551864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-551864
--- SKIP: TestNetworkPlugins/group/cilium (4.38s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-003720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-003720
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)